Test Report: KVM_Linux_crio 20090

                    
20ecd3658b86897ae797acf630cebadf77816c63:2024-12-13:37470

Test fail (10/326)

TestAddons/parallel/Ingress (153.96s)

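The failure below is the curl-over-ssh probe at addons_test.go:262: the remote curl exits with status 28 (curl's "operation timed out" code), so the ssh wrapper reports exit status 1 after roughly 2m11s. Purely as an illustrative sketch, not the test's actual code, the same probe with a simple retry loop could look like this in Go; the binary path, profile name and URL come from the log, while the five-minute budget and ten-second interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch only: re-run the probe from the log until it answers or a deadline
// passes. Exit status 28 from the remote curl means the request timed out.
func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed budget
	probe := []string{"-p", "addons-649719", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"}
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-linux-amd64", probe...).CombinedOutput()
		if err == nil {
			fmt.Printf("ingress responded:\n%s", out)
			return
		}
		fmt.Printf("probe failed: %v; retrying in 10s\n", err)
		time.Sleep(10 * time.Second) // assumed interval
	}
	fmt.Println("ingress never responded before the deadline")
}
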
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-649719 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-649719 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-649719 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [80f99f80-07c5-4365-88c6-8a2b2e3453d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [80f99f80-07c5-4365-88c6-8a2b2e3453d1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00826332s
I1213 19:06:09.013841   19544 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-649719 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.099477903s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-649719 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.191
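The nslookup at addons_test.go:297 queries the ingress-dns addon directly at the cluster IP reported by "minikube ip" (192.168.39.191 in this run). As an illustrative sketch only, not test code, the equivalent lookup in Go against that server; the IP and hostname come from the log, the timeout is an assumption.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// Sketch only: resolve the ingress-dns example hostname against the minikube
// VM's IP from the log, which is what `nslookup hello-john.test 192.168.39.191`
// does on the command line.
func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second} // assumed timeout
			return d.DialContext(ctx, network, "192.168.39.191:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}
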
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-649719 -n addons-649719
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 logs -n 25: (1.170917163s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| delete  | -p download-only-202348                                                                     | download-only-202348 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| delete  | -p download-only-541042                                                                     | download-only-541042 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| delete  | -p download-only-202348                                                                     | download-only-202348 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-148435 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-148435                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44529                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-148435                                                                     | binary-mirror-148435 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-649719                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-649719                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-649719 --wait=true                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:04 UTC | 13 Dec 24 19:04 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:04 UTC | 13 Dec 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | -p addons-649719                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-649719 ssh cat                                                                       | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-71c31fc0-8ce0-4c6c-8d89-dc3684024ee5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649719 ip                                                                            | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-649719 ssh curl -s                                                                   | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649719 ip                                                                            | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:02:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:02:30.144524   20291 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:02:30.144742   20291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:30.144750   20291 out.go:358] Setting ErrFile to fd 2...
	I1213 19:02:30.144754   20291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:30.144930   20291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:02:30.145500   20291 out.go:352] Setting JSON to false
	I1213 19:02:30.146330   20291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2693,"bootTime":1734113857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:02:30.146387   20291 start.go:139] virtualization: kvm guest
	I1213 19:02:30.148317   20291 out.go:177] * [addons-649719] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:02:30.149556   20291 notify.go:220] Checking for updates...
	I1213 19:02:30.149582   20291 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:02:30.150973   20291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:02:30.152093   20291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:02:30.153259   20291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:30.154324   20291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:02:30.155391   20291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:02:30.156585   20291 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:02:30.186528   20291 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 19:02:30.187565   20291 start.go:297] selected driver: kvm2
	I1213 19:02:30.187588   20291 start.go:901] validating driver "kvm2" against <nil>
	I1213 19:02:30.187600   20291 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:02:30.188253   20291 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:30.188327   20291 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 19:02:30.201803   20291 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 19:02:30.201866   20291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:02:30.202150   20291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:02:30.202194   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:02:30.202261   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:02:30.202271   20291 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 19:02:30.202342   20291 start.go:340] cluster config:
	{Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:30.202471   20291 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:30.203909   20291 out.go:177] * Starting "addons-649719" primary control-plane node in "addons-649719" cluster
	I1213 19:02:30.204945   20291 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:30.204986   20291 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:30.204999   20291 cache.go:56] Caching tarball of preloaded images
	I1213 19:02:30.205084   20291 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 19:02:30.205098   20291 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:02:30.205615   20291 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json ...
	I1213 19:02:30.205653   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json: {Name:mkd6f73573a3e1c86cfde6319719ff7b523c616e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:30.205835   20291 start.go:360] acquireMachinesLock for addons-649719: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 19:02:30.205900   20291 start.go:364] duration metric: took 46.771µs to acquireMachinesLock for "addons-649719"
	I1213 19:02:30.205929   20291 start.go:93] Provisioning new machine with config: &{Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:30.205982   20291 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 19:02:30.207434   20291 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1213 19:02:30.207573   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:02:30.207610   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:02:30.220765   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1213 19:02:30.221144   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:02:30.221689   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:02:30.221709   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:02:30.222146   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:02:30.222325   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:30.222469   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:30.222627   20291 start.go:159] libmachine.API.Create for "addons-649719" (driver="kvm2")
	I1213 19:02:30.222655   20291 client.go:168] LocalClient.Create starting
	I1213 19:02:30.222695   20291 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem
	I1213 19:02:30.561087   20291 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem
	I1213 19:02:30.714120   20291 main.go:141] libmachine: Running pre-create checks...
	I1213 19:02:30.714142   20291 main.go:141] libmachine: (addons-649719) Calling .PreCreateCheck
	I1213 19:02:30.714607   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:30.715053   20291 main.go:141] libmachine: Creating machine...
	I1213 19:02:30.715078   20291 main.go:141] libmachine: (addons-649719) Calling .Create
	I1213 19:02:30.715269   20291 main.go:141] libmachine: (addons-649719) creating KVM machine...
	I1213 19:02:30.715287   20291 main.go:141] libmachine: (addons-649719) creating network...
	I1213 19:02:30.716552   20291 main.go:141] libmachine: (addons-649719) DBG | found existing default KVM network
	I1213 19:02:30.717212   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:30.717052   20314 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1213 19:02:30.717239   20291 main.go:141] libmachine: (addons-649719) DBG | created network xml: 
	I1213 19:02:30.717256   20291 main.go:141] libmachine: (addons-649719) DBG | <network>
	I1213 19:02:30.717264   20291 main.go:141] libmachine: (addons-649719) DBG |   <name>mk-addons-649719</name>
	I1213 19:02:30.717272   20291 main.go:141] libmachine: (addons-649719) DBG |   <dns enable='no'/>
	I1213 19:02:30.717278   20291 main.go:141] libmachine: (addons-649719) DBG |   
	I1213 19:02:30.717288   20291 main.go:141] libmachine: (addons-649719) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1213 19:02:30.717296   20291 main.go:141] libmachine: (addons-649719) DBG |     <dhcp>
	I1213 19:02:30.717305   20291 main.go:141] libmachine: (addons-649719) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1213 19:02:30.717311   20291 main.go:141] libmachine: (addons-649719) DBG |     </dhcp>
	I1213 19:02:30.717317   20291 main.go:141] libmachine: (addons-649719) DBG |   </ip>
	I1213 19:02:30.717325   20291 main.go:141] libmachine: (addons-649719) DBG |   
	I1213 19:02:30.717353   20291 main.go:141] libmachine: (addons-649719) DBG | </network>
	I1213 19:02:30.717373   20291 main.go:141] libmachine: (addons-649719) DBG | 
	I1213 19:02:30.722555   20291 main.go:141] libmachine: (addons-649719) DBG | trying to create private KVM network mk-addons-649719 192.168.39.0/24...
	I1213 19:02:30.787750   20291 main.go:141] libmachine: (addons-649719) DBG | private KVM network mk-addons-649719 192.168.39.0/24 created
	I1213 19:02:30.787786   20291 main.go:141] libmachine: (addons-649719) setting up store path in /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 ...
	I1213 19:02:30.787804   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:30.787711   20314 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:30.787889   20291 main.go:141] libmachine: (addons-649719) building disk image from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 19:02:30.787984   20291 main.go:141] libmachine: (addons-649719) Downloading /home/jenkins/minikube-integration/20090-12353/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 19:02:31.060741   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.060641   20314 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa...
	I1213 19:02:31.322326   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.322172   20314 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/addons-649719.rawdisk...
	I1213 19:02:31.322365   20291 main.go:141] libmachine: (addons-649719) DBG | Writing magic tar header
	I1213 19:02:31.322405   20291 main.go:141] libmachine: (addons-649719) DBG | Writing SSH key tar header
	I1213 19:02:31.322443   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.322314   20314 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 ...
	I1213 19:02:31.322475   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 (perms=drwx------)
	I1213 19:02:31.322497   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719
	I1213 19:02:31.322508   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines (perms=drwxr-xr-x)
	I1213 19:02:31.322519   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube (perms=drwxr-xr-x)
	I1213 19:02:31.322525   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines
	I1213 19:02:31.322531   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353 (perms=drwxrwxr-x)
	I1213 19:02:31.322541   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 19:02:31.322554   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 19:02:31.322566   20291 main.go:141] libmachine: (addons-649719) creating domain...
	I1213 19:02:31.322579   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:31.322592   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353
	I1213 19:02:31.322605   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1213 19:02:31.322625   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins
	I1213 19:02:31.322644   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home
	I1213 19:02:31.322656   20291 main.go:141] libmachine: (addons-649719) DBG | skipping /home - not owner
	I1213 19:02:31.323542   20291 main.go:141] libmachine: (addons-649719) define libvirt domain using xml: 
	I1213 19:02:31.323557   20291 main.go:141] libmachine: (addons-649719) <domain type='kvm'>
	I1213 19:02:31.323567   20291 main.go:141] libmachine: (addons-649719)   <name>addons-649719</name>
	I1213 19:02:31.323575   20291 main.go:141] libmachine: (addons-649719)   <memory unit='MiB'>4000</memory>
	I1213 19:02:31.323588   20291 main.go:141] libmachine: (addons-649719)   <vcpu>2</vcpu>
	I1213 19:02:31.323596   20291 main.go:141] libmachine: (addons-649719)   <features>
	I1213 19:02:31.323609   20291 main.go:141] libmachine: (addons-649719)     <acpi/>
	I1213 19:02:31.323619   20291 main.go:141] libmachine: (addons-649719)     <apic/>
	I1213 19:02:31.323629   20291 main.go:141] libmachine: (addons-649719)     <pae/>
	I1213 19:02:31.323645   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.323656   20291 main.go:141] libmachine: (addons-649719)   </features>
	I1213 19:02:31.323664   20291 main.go:141] libmachine: (addons-649719)   <cpu mode='host-passthrough'>
	I1213 19:02:31.323678   20291 main.go:141] libmachine: (addons-649719)   
	I1213 19:02:31.323691   20291 main.go:141] libmachine: (addons-649719)   </cpu>
	I1213 19:02:31.323721   20291 main.go:141] libmachine: (addons-649719)   <os>
	I1213 19:02:31.323741   20291 main.go:141] libmachine: (addons-649719)     <type>hvm</type>
	I1213 19:02:31.323748   20291 main.go:141] libmachine: (addons-649719)     <boot dev='cdrom'/>
	I1213 19:02:31.323758   20291 main.go:141] libmachine: (addons-649719)     <boot dev='hd'/>
	I1213 19:02:31.323783   20291 main.go:141] libmachine: (addons-649719)     <bootmenu enable='no'/>
	I1213 19:02:31.323802   20291 main.go:141] libmachine: (addons-649719)   </os>
	I1213 19:02:31.323827   20291 main.go:141] libmachine: (addons-649719)   <devices>
	I1213 19:02:31.323845   20291 main.go:141] libmachine: (addons-649719)     <disk type='file' device='cdrom'>
	I1213 19:02:31.323862   20291 main.go:141] libmachine: (addons-649719)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/boot2docker.iso'/>
	I1213 19:02:31.323874   20291 main.go:141] libmachine: (addons-649719)       <target dev='hdc' bus='scsi'/>
	I1213 19:02:31.323883   20291 main.go:141] libmachine: (addons-649719)       <readonly/>
	I1213 19:02:31.323893   20291 main.go:141] libmachine: (addons-649719)     </disk>
	I1213 19:02:31.323904   20291 main.go:141] libmachine: (addons-649719)     <disk type='file' device='disk'>
	I1213 19:02:31.323916   20291 main.go:141] libmachine: (addons-649719)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1213 19:02:31.323936   20291 main.go:141] libmachine: (addons-649719)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/addons-649719.rawdisk'/>
	I1213 19:02:31.323948   20291 main.go:141] libmachine: (addons-649719)       <target dev='hda' bus='virtio'/>
	I1213 19:02:31.323963   20291 main.go:141] libmachine: (addons-649719)     </disk>
	I1213 19:02:31.323975   20291 main.go:141] libmachine: (addons-649719)     <interface type='network'>
	I1213 19:02:31.323985   20291 main.go:141] libmachine: (addons-649719)       <source network='mk-addons-649719'/>
	I1213 19:02:31.323995   20291 main.go:141] libmachine: (addons-649719)       <model type='virtio'/>
	I1213 19:02:31.324002   20291 main.go:141] libmachine: (addons-649719)     </interface>
	I1213 19:02:31.324009   20291 main.go:141] libmachine: (addons-649719)     <interface type='network'>
	I1213 19:02:31.324018   20291 main.go:141] libmachine: (addons-649719)       <source network='default'/>
	I1213 19:02:31.324029   20291 main.go:141] libmachine: (addons-649719)       <model type='virtio'/>
	I1213 19:02:31.324037   20291 main.go:141] libmachine: (addons-649719)     </interface>
	I1213 19:02:31.324049   20291 main.go:141] libmachine: (addons-649719)     <serial type='pty'>
	I1213 19:02:31.324059   20291 main.go:141] libmachine: (addons-649719)       <target port='0'/>
	I1213 19:02:31.324069   20291 main.go:141] libmachine: (addons-649719)     </serial>
	I1213 19:02:31.324077   20291 main.go:141] libmachine: (addons-649719)     <console type='pty'>
	I1213 19:02:31.324088   20291 main.go:141] libmachine: (addons-649719)       <target type='serial' port='0'/>
	I1213 19:02:31.324100   20291 main.go:141] libmachine: (addons-649719)     </console>
	I1213 19:02:31.324108   20291 main.go:141] libmachine: (addons-649719)     <rng model='virtio'>
	I1213 19:02:31.324116   20291 main.go:141] libmachine: (addons-649719)       <backend model='random'>/dev/random</backend>
	I1213 19:02:31.324126   20291 main.go:141] libmachine: (addons-649719)     </rng>
	I1213 19:02:31.324137   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.324153   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.324165   20291 main.go:141] libmachine: (addons-649719)   </devices>
	I1213 19:02:31.324175   20291 main.go:141] libmachine: (addons-649719) </domain>
	I1213 19:02:31.324188   20291 main.go:141] libmachine: (addons-649719) 
	I1213 19:02:31.329771   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:d4:1e:3f in network default
	I1213 19:02:31.330300   20291 main.go:141] libmachine: (addons-649719) starting domain...
	I1213 19:02:31.330320   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:31.330329   20291 main.go:141] libmachine: (addons-649719) ensuring networks are active...
	I1213 19:02:31.330831   20291 main.go:141] libmachine: (addons-649719) Ensuring network default is active
	I1213 19:02:31.331169   20291 main.go:141] libmachine: (addons-649719) Ensuring network mk-addons-649719 is active
	I1213 19:02:31.331588   20291 main.go:141] libmachine: (addons-649719) getting domain XML...
	I1213 19:02:31.332204   20291 main.go:141] libmachine: (addons-649719) creating domain...
	I1213 19:02:32.698282   20291 main.go:141] libmachine: (addons-649719) waiting for IP...
	I1213 19:02:32.699058   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:32.699430   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:32.699458   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:32.699412   20314 retry.go:31] will retry after 308.894471ms: waiting for domain to come up
	I1213 19:02:33.010171   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.010580   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.010615   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.010562   20314 retry.go:31] will retry after 284.369707ms: waiting for domain to come up
	I1213 19:02:33.297096   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.297510   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.297537   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.297488   20314 retry.go:31] will retry after 455.385881ms: waiting for domain to come up
	I1213 19:02:33.754166   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.754611   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.754637   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.754589   20314 retry.go:31] will retry after 439.340553ms: waiting for domain to come up
	I1213 19:02:34.195082   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:34.195554   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:34.195582   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:34.195529   20314 retry.go:31] will retry after 703.177309ms: waiting for domain to come up
	I1213 19:02:34.900606   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:34.901022   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:34.901071   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:34.901020   20314 retry.go:31] will retry after 639.233467ms: waiting for domain to come up
	I1213 19:02:35.541503   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:35.541933   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:35.541975   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:35.541926   20314 retry.go:31] will retry after 782.355402ms: waiting for domain to come up
	I1213 19:02:36.325584   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:36.325967   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:36.325984   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:36.325950   20314 retry.go:31] will retry after 1.329458891s: waiting for domain to come up
	I1213 19:02:37.657408   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:37.657773   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:37.657803   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:37.657767   20314 retry.go:31] will retry after 1.321375468s: waiting for domain to come up
	I1213 19:02:38.981391   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:38.981764   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:38.981781   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:38.981746   20314 retry.go:31] will retry after 1.935955387s: waiting for domain to come up
	I1213 19:02:40.919661   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:40.920103   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:40.920161   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:40.920098   20314 retry.go:31] will retry after 2.67995961s: waiting for domain to come up
	I1213 19:02:43.601128   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:43.601583   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:43.601609   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:43.601554   20314 retry.go:31] will retry after 3.028482314s: waiting for domain to come up
	I1213 19:02:46.631981   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:46.632417   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:46.632441   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:46.632396   20314 retry.go:31] will retry after 3.308087766s: waiting for domain to come up
	I1213 19:02:49.943819   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:49.944141   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:49.944158   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:49.944119   20314 retry.go:31] will retry after 4.38190267s: waiting for domain to come up
	I1213 19:02:54.331030   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.331457   20291 main.go:141] libmachine: (addons-649719) found domain IP: 192.168.39.191
	I1213 19:02:54.331488   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has current primary IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.331496   20291 main.go:141] libmachine: (addons-649719) reserving static IP address...
	I1213 19:02:54.331789   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find host DHCP lease matching {name: "addons-649719", mac: "52:54:00:9c:6b:aa", ip: "192.168.39.191"} in network mk-addons-649719
	I1213 19:02:54.398337   20291 main.go:141] libmachine: (addons-649719) reserved static IP address 192.168.39.191 for domain addons-649719
	I1213 19:02:54.398363   20291 main.go:141] libmachine: (addons-649719) waiting for SSH...
	I1213 19:02:54.398380   20291 main.go:141] libmachine: (addons-649719) DBG | Getting to WaitForSSH function...
	I1213 19:02:54.400646   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.400943   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.400968   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.401130   20291 main.go:141] libmachine: (addons-649719) DBG | Using SSH client type: external
	I1213 19:02:54.401158   20291 main.go:141] libmachine: (addons-649719) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa (-rw-------)
	I1213 19:02:54.401195   20291 main.go:141] libmachine: (addons-649719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 19:02:54.401215   20291 main.go:141] libmachine: (addons-649719) DBG | About to run SSH command:
	I1213 19:02:54.401234   20291 main.go:141] libmachine: (addons-649719) DBG | exit 0
	I1213 19:02:54.530527   20291 main.go:141] libmachine: (addons-649719) DBG | SSH cmd err, output: <nil>: 
	I1213 19:02:54.530796   20291 main.go:141] libmachine: (addons-649719) KVM machine creation complete
	I1213 19:02:54.531070   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:54.531608   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:54.531778   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:54.531900   20291 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1213 19:02:54.531915   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:02:54.533197   20291 main.go:141] libmachine: Detecting operating system of created instance...
	I1213 19:02:54.533211   20291 main.go:141] libmachine: Waiting for SSH to be available...
	I1213 19:02:54.533217   20291 main.go:141] libmachine: Getting to WaitForSSH function...
	I1213 19:02:54.533222   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.535534   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.535859   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.535884   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.536029   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.536211   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.536379   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.536506   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.536655   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.536836   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.536848   20291 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1213 19:02:54.637685   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:02:54.637716   20291 main.go:141] libmachine: Detecting the provisioner...
	I1213 19:02:54.637727   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.640358   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.640683   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.640711   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.640859   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.641027   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.641173   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.641309   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.641484   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.641632   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.641642   20291 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1213 19:02:54.747123   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1213 19:02:54.747202   20291 main.go:141] libmachine: found compatible host: buildroot
	I1213 19:02:54.747217   20291 main.go:141] libmachine: Provisioning with buildroot...
	I1213 19:02:54.747227   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.747451   20291 buildroot.go:166] provisioning hostname "addons-649719"
	I1213 19:02:54.747478   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.747675   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.750114   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.750509   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.750536   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.750715   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.750891   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.751032   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.751183   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.751327   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.751472   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.751482   20291 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649719 && echo "addons-649719" | sudo tee /etc/hostname
	I1213 19:02:54.867376   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649719
	
	I1213 19:02:54.867401   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.869855   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.870130   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.870155   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.870343   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.870506   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.870660   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.870805   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.870978   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.871184   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.871203   20291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649719/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:02:54.983792   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:02:54.983817   20291 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 19:02:54.983844   20291 buildroot.go:174] setting up certificates
	I1213 19:02:54.983854   20291 provision.go:84] configureAuth start
	I1213 19:02:54.983862   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.984127   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:54.986611   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.986907   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.986933   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.987046   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.989004   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.989331   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.989360   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.989428   20291 provision.go:143] copyHostCerts
	I1213 19:02:54.989533   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 19:02:54.989650   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 19:02:54.989706   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 19:02:54.989752   20291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.addons-649719 san=[127.0.0.1 192.168.39.191 addons-649719 localhost minikube]
	I1213 19:02:55.052653   20291 provision.go:177] copyRemoteCerts
	I1213 19:02:55.052703   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:02:55.052724   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.054920   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.055200   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.055224   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.055421   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.055582   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.055708   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.055803   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.136334   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:02:55.158027   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:02:55.178943   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:02:55.199896   20291 provision.go:87] duration metric: took 216.031324ms to configureAuth
	I1213 19:02:55.199945   20291 buildroot.go:189] setting minikube options for container-runtime
	I1213 19:02:55.200111   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:55.200185   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.202574   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.202875   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.202902   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.203054   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.203221   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.203365   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.203526   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.203666   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:55.203802   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:55.203815   20291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:02:55.411750   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:02:55.411782   20291 main.go:141] libmachine: Checking connection to Docker...
	I1213 19:02:55.411791   20291 main.go:141] libmachine: (addons-649719) Calling .GetURL
	I1213 19:02:55.413098   20291 main.go:141] libmachine: (addons-649719) DBG | using libvirt version 6000000
	I1213 19:02:55.415418   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.415759   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.415787   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.415957   20291 main.go:141] libmachine: Docker is up and running!
	I1213 19:02:55.415967   20291 main.go:141] libmachine: Reticulating splines...
	I1213 19:02:55.415973   20291 client.go:171] duration metric: took 25.193307341s to LocalClient.Create
	I1213 19:02:55.415994   20291 start.go:167] duration metric: took 25.193367401s to libmachine.API.Create "addons-649719"
	I1213 19:02:55.416007   20291 start.go:293] postStartSetup for "addons-649719" (driver="kvm2")
	I1213 19:02:55.416020   20291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:02:55.416038   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.416259   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:02:55.416284   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.418028   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.418282   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.418306   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.418416   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.418593   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.418735   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.418859   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.500168   20291 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:02:55.503708   20291 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 19:02:55.503726   20291 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 19:02:55.503781   20291 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 19:02:55.503803   20291 start.go:296] duration metric: took 87.790722ms for postStartSetup
	I1213 19:02:55.503831   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:55.504336   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:55.506607   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.506971   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.507005   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.507242   20291 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json ...
	I1213 19:02:55.507443   20291 start.go:128] duration metric: took 25.301449948s to createHost
	I1213 19:02:55.507465   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.509676   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.509992   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.510016   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.510148   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.510300   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.510459   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.510598   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.510718   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:55.510900   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:55.510912   20291 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 19:02:55.614815   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734116575.592233991
	
	I1213 19:02:55.614840   20291 fix.go:216] guest clock: 1734116575.592233991
	I1213 19:02:55.614900   20291 fix.go:229] Guest: 2024-12-13 19:02:55.592233991 +0000 UTC Remote: 2024-12-13 19:02:55.507455192 +0000 UTC m=+25.397340381 (delta=84.778799ms)
	I1213 19:02:55.614935   20291 fix.go:200] guest clock delta is within tolerance: 84.778799ms
	I1213 19:02:55.614940   20291 start.go:83] releasing machines lock for "addons-649719", held for 25.40902749s
	I1213 19:02:55.614965   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.615218   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:55.617685   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.618009   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.618030   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.618152   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618616   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618763   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618838   20291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:02:55.618894   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.619009   20291 ssh_runner.go:195] Run: cat /version.json
	I1213 19:02:55.619034   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.621572   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.621774   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.621991   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.622012   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.622123   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.622246   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.622266   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.622285   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.622491   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.622507   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.622638   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.622649   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.622896   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.623019   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.730214   20291 ssh_runner.go:195] Run: systemctl --version
	I1213 19:02:55.735908   20291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:02:55.887134   20291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:02:55.893048   20291 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:02:55.893106   20291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:55.907341   20291 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 19:02:55.907365   20291 start.go:495] detecting cgroup driver to use...
	I1213 19:02:55.907432   20291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:02:55.921781   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:02:55.934253   20291 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:02:55.934301   20291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:02:55.946609   20291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:02:55.959054   20291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:02:56.075739   20291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:02:56.211389   20291 docker.go:233] disabling docker service ...
	I1213 19:02:56.211463   20291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:02:56.224909   20291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:02:56.236733   20291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:02:56.368552   20291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:02:56.500533   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:02:56.513226   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:02:56.529786   20291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:02:56.529851   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.539308   20291 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:02:56.539364   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.548827   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.558540   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.567956   20291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:02:56.577771   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.587149   20291 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.602221   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.611777   20291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:02:56.621794   20291 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 19:02:56.621835   20291 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 19:02:56.635123   20291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:02:56.645184   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:56.782500   20291 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:02:56.871537   20291 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:02:56.871624   20291 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:02:56.875799   20291 start.go:563] Will wait 60s for crictl version
	I1213 19:02:56.875859   20291 ssh_runner.go:195] Run: which crictl
	I1213 19:02:56.879225   20291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:02:56.916160   20291 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 19:02:56.916274   20291 ssh_runner.go:195] Run: crio --version
	I1213 19:02:56.941598   20291 ssh_runner.go:195] Run: crio --version
	I1213 19:02:56.969503   20291 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1213 19:02:56.970660   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:56.973112   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:56.973407   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:56.973431   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:56.973610   20291 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 19:02:56.977269   20291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:56.988821   20291 kubeadm.go:883] updating cluster {Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:02:56.988912   20291 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:56.988952   20291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:57.017866   20291 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1213 19:02:57.017924   20291 ssh_runner.go:195] Run: which lz4
	I1213 19:02:57.021534   20291 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 19:02:57.025150   20291 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 19:02:57.025176   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1213 19:02:58.107948   20291 crio.go:462] duration metric: took 1.086435606s to copy over tarball
	I1213 19:02:58.108016   20291 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 19:03:00.167046   20291 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059000994s)
	I1213 19:03:00.167075   20291 crio.go:469] duration metric: took 2.059102811s to extract the tarball
	I1213 19:03:00.167084   20291 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 19:03:00.214289   20291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:03:00.252257   20291 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:03:00.252279   20291 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:03:00.252286   20291 kubeadm.go:934] updating node { 192.168.39.191 8443 v1.31.2 crio true true} ...
	I1213 19:03:00.252380   20291 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-649719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:03:00.252457   20291 ssh_runner.go:195] Run: crio config
	I1213 19:03:00.295464   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:03:00.295490   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:03:00.295509   20291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:03:00.295534   20291 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649719 NodeName:addons-649719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:03:00.295683   20291 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-649719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.191"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:03:00.295757   20291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:03:00.305198   20291 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:03:00.305252   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:03:00.313821   20291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 19:03:00.328761   20291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:03:00.343238   20291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1213 19:03:00.357697   20291 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I1213 19:03:00.361041   20291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:03:00.371780   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:03:00.487741   20291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:03:00.504412   20291 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719 for IP: 192.168.39.191
	I1213 19:03:00.504442   20291 certs.go:194] generating shared ca certs ...
	I1213 19:03:00.504463   20291 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.504626   20291 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 19:03:00.607732   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt ...
	I1213 19:03:00.607758   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt: {Name:mkbfe6eb30bb8ad75f44083b09196d4656fd8b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.608382   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key ...
	I1213 19:03:00.608398   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key: {Name:mk423e5e304b1945183e810a237f3c28213efcd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.608499   20291 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 19:03:00.724203   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt ...
	I1213 19:03:00.724228   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt: {Name:mk643f1f713df237848413aeec087dacce1c8826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.724384   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key ...
	I1213 19:03:00.724400   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key: {Name:mkb212e911818f44d31f4f50e68bf9bf8949fc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.724487   20291 certs.go:256] generating profile certs ...
	I1213 19:03:00.724551   20291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key
	I1213 19:03:00.724566   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt with IP's: []
	I1213 19:03:00.901588   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt ...
	I1213 19:03:00.901615   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: {Name:mkcdd50e72c448911a91bb57ba2b3c72dc3c1456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.901784   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key ...
	I1213 19:03:00.901813   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key: {Name:mk4b17521c748ccfd051d1fa287b436fe3eaa077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.901903   20291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503
	I1213 19:03:00.901927   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.191]
	I1213 19:03:00.959618   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 ...
	I1213 19:03:00.959640   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503: {Name:mk10c115767c744fbf65f9973a5d604f0d575ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.959798   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503 ...
	I1213 19:03:00.959814   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503: {Name:mk639b04307cad2c5f86a67ddc271fae9f7f0db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.959899   20291 certs.go:381] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt
	I1213 19:03:00.959989   20291 certs.go:385] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key
	I1213 19:03:00.960061   20291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key
	I1213 19:03:00.960091   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt with IP's: []
	I1213 19:03:01.047370   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt ...
	I1213 19:03:01.047394   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt: {Name:mk92e6f8bbccbfa9955ed41e3b9a578eead1de7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:01.047554   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key ...
	I1213 19:03:01.047570   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key: {Name:mkbfc849d9075d20f333d3bfa98996df9a8ea9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:01.047767   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 19:03:01.047809   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:03:01.047870   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:03:01.047912   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 19:03:01.048988   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:03:01.073670   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:03:01.094532   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:03:01.115260   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:03:01.135860   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:03:01.156677   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:03:01.190615   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:03:01.222785   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:03:01.244994   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:03:01.265476   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:03:01.279792   20291 ssh_runner.go:195] Run: openssl version
	I1213 19:03:01.285146   20291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:03:01.294543   20291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.298539   20291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.298584   20291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.303667   20291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 19:03:01.312872   20291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:03:01.316422   20291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:03:01.316467   20291 kubeadm.go:392] StartCluster: {Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:03:01.316539   20291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:03:01.316577   20291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:03:01.349078   20291 cri.go:89] found id: ""
	I1213 19:03:01.349135   20291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:03:01.358168   20291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:03:01.367221   20291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:03:01.375768   20291 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:03:01.375783   20291 kubeadm.go:157] found existing configuration files:
	
	I1213 19:03:01.375812   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:03:01.383897   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:03:01.383947   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:03:01.392589   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:03:01.400709   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:03:01.400749   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:03:01.409098   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:03:01.417065   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:03:01.417103   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:03:01.425289   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:03:01.433078   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:03:01.433123   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:03:01.441151   20291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 19:03:01.582195   20291 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:03:11.707877   20291 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:03:11.707932   20291 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:03:11.707991   20291 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:03:11.708075   20291 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:03:11.708156   20291 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:03:11.708208   20291 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:03:11.709654   20291 out.go:235]   - Generating certificates and keys ...
	I1213 19:03:11.709714   20291 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:03:11.709786   20291 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:03:11.709860   20291 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:03:11.709912   20291 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:03:11.709963   20291 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:03:11.710006   20291 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:03:11.710050   20291 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:03:11.710148   20291 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-649719 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1213 19:03:11.710204   20291 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:03:11.710322   20291 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-649719 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1213 19:03:11.710380   20291 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:03:11.710437   20291 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:03:11.710504   20291 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:03:11.710592   20291 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:03:11.710674   20291 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:03:11.710783   20291 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:03:11.710836   20291 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:03:11.710917   20291 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:03:11.710970   20291 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:03:11.711044   20291 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:03:11.711108   20291 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:03:11.712259   20291 out.go:235]   - Booting up control plane ...
	I1213 19:03:11.712368   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:03:11.712464   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:03:11.712553   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:03:11.712677   20291 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:03:11.712798   20291 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:03:11.712841   20291 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:03:11.712958   20291 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:03:11.713077   20291 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:03:11.713133   20291 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001092244s
	I1213 19:03:11.713193   20291 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:03:11.713242   20291 kubeadm.go:310] [api-check] The API server is healthy after 4.502177067s
	I1213 19:03:11.713328   20291 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:03:11.713458   20291 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:03:11.713525   20291 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:03:11.713695   20291 kubeadm.go:310] [mark-control-plane] Marking the node addons-649719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:03:11.713743   20291 kubeadm.go:310] [bootstrap-token] Using token: fm4k4c.240oitggzttgdkur
	I1213 19:03:11.714993   20291 out.go:235]   - Configuring RBAC rules ...
	I1213 19:03:11.715098   20291 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:03:11.715191   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:03:11.715347   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:03:11.715457   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:03:11.715559   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:03:11.715645   20291 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:03:11.715767   20291 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:03:11.715818   20291 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:03:11.715883   20291 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:03:11.715890   20291 kubeadm.go:310] 
	I1213 19:03:11.715975   20291 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:03:11.715983   20291 kubeadm.go:310] 
	I1213 19:03:11.716051   20291 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:03:11.716062   20291 kubeadm.go:310] 
	I1213 19:03:11.716088   20291 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:03:11.716141   20291 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:03:11.716188   20291 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:03:11.716194   20291 kubeadm.go:310] 
	I1213 19:03:11.716244   20291 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:03:11.716252   20291 kubeadm.go:310] 
	I1213 19:03:11.716290   20291 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:03:11.716296   20291 kubeadm.go:310] 
	I1213 19:03:11.716342   20291 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:03:11.716404   20291 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:03:11.716479   20291 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:03:11.716488   20291 kubeadm.go:310] 
	I1213 19:03:11.716578   20291 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:03:11.716647   20291 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:03:11.716653   20291 kubeadm.go:310] 
	I1213 19:03:11.716724   20291 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fm4k4c.240oitggzttgdkur \
	I1213 19:03:11.716835   20291 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 19:03:11.716856   20291 kubeadm.go:310] 	--control-plane 
	I1213 19:03:11.716862   20291 kubeadm.go:310] 
	I1213 19:03:11.716930   20291 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:03:11.716942   20291 kubeadm.go:310] 
	I1213 19:03:11.717013   20291 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fm4k4c.240oitggzttgdkur \
	I1213 19:03:11.717176   20291 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 19:03:11.717196   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:03:11.717209   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:03:11.718463   20291 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 19:03:11.719496   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 19:03:11.731103   20291 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 19:03:11.748875   20291 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:03:11.748961   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:11.749002   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649719 minikube.k8s.io/updated_at=2024_12_13T19_03_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-649719 minikube.k8s.io/primary=true
	I1213 19:03:11.870977   20291 ops.go:34] apiserver oom_adj: -16
	I1213 19:03:11.871082   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:12.371950   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:12.872047   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:13.371422   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:13.871320   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:14.371786   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:14.872099   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:15.371485   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:15.872077   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:16.371337   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:16.511233   20291 kubeadm.go:1113] duration metric: took 4.762339966s to wait for elevateKubeSystemPrivileges
	I1213 19:03:16.511271   20291 kubeadm.go:394] duration metric: took 15.194808803s to StartCluster
	I1213 19:03:16.511298   20291 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:16.511421   20291 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:03:16.511877   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:16.512106   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:03:16.512110   20291 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:03:16.512178   20291 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 19:03:16.512278   20291 addons.go:69] Setting yakd=true in profile "addons-649719"
	I1213 19:03:16.512300   20291 addons.go:234] Setting addon yakd=true in "addons-649719"
	I1213 19:03:16.512299   20291 addons.go:69] Setting ingress-dns=true in profile "addons-649719"
	I1213 19:03:16.512306   20291 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649719"
	I1213 19:03:16.512326   20291 addons.go:234] Setting addon ingress-dns=true in "addons-649719"
	I1213 19:03:16.512331   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512329   20291 addons.go:69] Setting registry=true in profile "addons-649719"
	I1213 19:03:16.512326   20291 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-649719"
	I1213 19:03:16.512350   20291 addons.go:234] Setting addon registry=true in "addons-649719"
	I1213 19:03:16.512356   20291 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-649719"
	I1213 19:03:16.512372   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512375   20291 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-649719"
	I1213 19:03:16.512379   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512386   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512399   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512773   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512778   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512794   20291 addons.go:69] Setting cloud-spanner=true in profile "addons-649719"
	I1213 19:03:16.512803   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512802   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512807   20291 addons.go:234] Setting addon cloud-spanner=true in "addons-649719"
	I1213 19:03:16.512814   20291 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649719"
	I1213 19:03:16.512818   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512831   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512836   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512846   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512855   20291 addons.go:69] Setting volumesnapshots=true in profile "addons-649719"
	I1213 19:03:16.512868   20291 addons.go:234] Setting addon volumesnapshots=true in "addons-649719"
	I1213 19:03:16.512888   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512903   20291 addons.go:69] Setting metrics-server=true in profile "addons-649719"
	I1213 19:03:16.512905   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:03:16.512919   20291 addons.go:234] Setting addon metrics-server=true in "addons-649719"
	I1213 19:03:16.512947   20291 addons.go:69] Setting gcp-auth=true in profile "addons-649719"
	I1213 19:03:16.512957   20291 addons.go:69] Setting inspektor-gadget=true in profile "addons-649719"
	I1213 19:03:16.512962   20291 mustload.go:65] Loading cluster: addons-649719
	I1213 19:03:16.512970   20291 addons.go:234] Setting addon inspektor-gadget=true in "addons-649719"
	I1213 19:03:16.512989   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513056   20291 addons.go:69] Setting ingress=true in profile "addons-649719"
	I1213 19:03:16.513069   20291 addons.go:234] Setting addon ingress=true in "addons-649719"
	I1213 19:03:16.513097   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513126   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:03:16.513168   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513194   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513278   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513305   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513368   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513394   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513461   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513493   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513512   20291 addons.go:69] Setting default-storageclass=true in profile "addons-649719"
	I1213 19:03:16.513558   20291 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649719"
	I1213 19:03:16.512948   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512846   20291 addons.go:69] Setting volcano=true in profile "addons-649719"
	I1213 19:03:16.513730   20291 addons.go:234] Setting addon volcano=true in "addons-649719"
	I1213 19:03:16.513759   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512806   20291 addons.go:69] Setting storage-provisioner=true in profile "addons-649719"
	I1213 19:03:16.513781   20291 addons.go:234] Setting addon storage-provisioner=true in "addons-649719"
	I1213 19:03:16.513805   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513926   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513952   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514019   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514043   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512838   20291 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649719"
	I1213 19:03:16.514130   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514149   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514150   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514176   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512806   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514787   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513500   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514796   20291 out.go:177] * Verifying Kubernetes components...
	I1213 19:03:16.513528   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514269   20291 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649719"
	I1213 19:03:16.515241   20291 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-649719"
	I1213 19:03:16.515270   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.516383   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:03:16.514291   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514484   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.520829   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.534492   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1213 19:03:16.534665   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I1213 19:03:16.534879   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I1213 19:03:16.534918   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1213 19:03:16.535236   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.535335   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.535782   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.535801   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.535820   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.535858   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.535884   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.536463   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.536480   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.536549   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.536842   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.536898   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.537452   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.537488   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.538364   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.538385   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.538801   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.538839   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.540276   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.540423   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.540446   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.540899   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.541475   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.541519   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.542581   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.542623   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.562584   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1213 19:03:16.563123   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.563721   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.563750   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.564131   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.564314   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.564748   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1213 19:03:16.565135   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.565355   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I1213 19:03:16.565878   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.565898   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.565914   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.566298   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.566439   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.566459   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.567080   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.567119   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.567349   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.567897   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.567947   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.569506   20291 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-649719"
	I1213 19:03:16.569552   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.569917   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.569958   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.570305   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I1213 19:03:16.570810   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.571380   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.571400   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.571794   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.572059   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.582926   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I1213 19:03:16.582947   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
	I1213 19:03:16.582956   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1213 19:03:16.582972   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.582927   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1213 19:03:16.583379   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.583421   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.583890   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.583981   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.584034   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.584088   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.585384   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585407   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.585392   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585468   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.585496   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585516   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.586071   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586080   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586128   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586202   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I1213 19:03:16.586312   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.586311   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.586786   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.586817   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.587295   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.587668   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.587690   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.587750   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.587856   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.588045   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.588549   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.588584   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.589908   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I1213 19:03:16.589931   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.589985   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I1213 19:03:16.590213   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.590751   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.590766   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.590824   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.591386   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.591417   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.591749   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.591885   20291 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:03:16.592542   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.593188   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:03:16.593206   20291 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:03:16.593229   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.593316   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.594051   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:03:16.595140   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:03:16.595523   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1213 19:03:16.596304   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.596827   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.596845   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.597032   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:03:16.597831   20291 addons.go:234] Setting addon default-storageclass=true in "addons-649719"
	I1213 19:03:16.597872   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.598216   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.598249   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.598461   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.599156   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.599236   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.599281   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:03:16.599612   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.599632   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.600258   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.600430   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.600577   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.600794   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.601209   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.601301   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:03:16.602401   20291 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:03:16.603419   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:03:16.603525   20291 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:03:16.603550   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:03:16.603573   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.605528   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:03:16.606098   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1213 19:03:16.606462   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.606896   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.606920   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.606960   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.607297   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.607467   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.607542   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:03:16.607792   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.607817   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.607973   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.608114   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.608217   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.608286   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.608497   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:03:16.608515   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:03:16.608534   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.609497   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I1213 19:03:16.609841   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.610259   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.610289   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.610613   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.611176   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.611226   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.611292   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.611622   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.611691   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.611708   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.611876   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.611910   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.612058   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.612092   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.612253   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.612374   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.612488   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.612616   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.612655   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.616895   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I1213 19:03:16.617404   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.617909   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.617927   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.618273   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.618798   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.618831   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.618924   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I1213 19:03:16.619648   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.620135   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.620159   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.620523   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.620691   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.622208   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.623896   20291 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:03:16.624966   20291 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:03:16.624985   20291 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:03:16.625004   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.627984   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I1213 19:03:16.628163   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.628491   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.628489   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.628572   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.628606   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.628738   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.628878   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.628997   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.629365   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.629378   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.629782   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.629930   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.631428   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.632943   20291 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:03:16.634668   20291 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:03:16.634686   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:03:16.634703   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.636009   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I1213 19:03:16.636565   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.637047   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I1213 19:03:16.637538   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.637554   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.637963   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.637977   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.638094   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.638116   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.638556   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.638590   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.638826   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.638887   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.638826   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1213 19:03:16.639452   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.639528   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.639546   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.639530   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.640409   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.640463   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.640483   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.640499   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.640681   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.640929   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.640978   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.641352   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I1213 19:03:16.641985   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.642027   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.642292   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.642801   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.642816   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.643211   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.643350   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.643412   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.643576   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I1213 19:03:16.643958   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.644640   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.644662   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.644935   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.645142   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:03:16.645153   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.645759   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.646239   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:03:16.646254   20291 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:03:16.646271   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.647488   20291 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:03:16.648123   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.648479   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1213 19:03:16.649137   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.649222   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.649507   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.649526   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.649748   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.649770   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.649891   20291 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:03:16.650009   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.650242   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:16.650342   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.650365   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.650746   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.650835   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.650967   20291 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:03:16.650977   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:03:16.650991   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.651572   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.652009   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I1213 19:03:16.652432   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.652905   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.652922   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.653278   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.653432   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.653548   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:16.654148   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.654524   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.654975   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.655013   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.655243   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.655481   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.655637   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.655685   20291 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:03:16.655727   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:03:16.655780   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.656032   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.656901   20291 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:03:16.656924   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:03:16.657092   20291 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:03:16.657109   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:03:16.657123   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.657169   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.657306   20291 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:03:16.657510   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I1213 19:03:16.658186   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.659162   20291 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:03:16.659181   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:03:16.659198   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.659354   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.659371   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.660208   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.661272   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.661316   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.661605   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.662495   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.662596   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I1213 19:03:16.662888   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1213 19:03:16.663273   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.663495   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.663515   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.663673   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.663683   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.663738   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.663752   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.663779   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.663967   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.664010   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.664072   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.664115   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.664250   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.664261   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.664340   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.664425   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.664762   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.664785   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.665779   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.665805   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.665968   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.666154   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.666292   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.666450   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.666457   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.666532   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.666746   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:16.666758   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:16.666973   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:16.666984   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:16.666991   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:16.666997   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:16.667146   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:16.667159   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:16.667161   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	W1213 19:03:16.667229   20291 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 19:03:16.667879   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.667895   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.673231   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.673419   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.675118   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.675525   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I1213 19:03:16.675824   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.676139   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.676151   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.676363   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.676443   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.676730   20291 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:03:16.677409   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
	I1213 19:03:16.677795   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:03:16.677812   20291 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:03:16.677830   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.677849   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.677931   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.678265   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.678276   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.678600   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.678948   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.679127   20291 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:03:16.680526   20291 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:03:16.680546   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:03:16.680561   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.681039   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.681459   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.681895   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.681925   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.682254   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.682388   20291 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:03:16.682396   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.682543   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.682770   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	W1213 19:03:16.683488   20291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54236->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.683508   20291 retry.go:31] will retry after 127.895773ms: ssh: handshake failed: read tcp 192.168.39.1:54236->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.683699   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I1213 19:03:16.683795   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.684037   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.684098   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.684107   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.684383   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.684498   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.684625   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.684636   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.684647   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.684730   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.684917   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.685056   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.686186   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.686380   20291 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:03:16.686393   20291 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:03:16.686409   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.688432   20291 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:03:16.688824   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.689204   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.689234   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.689372   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.689498   20291 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:03:16.689513   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:03:16.689528   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.689532   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.689669   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.689814   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.692254   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.692639   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.692686   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.692887   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.693086   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.693192   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.693282   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	W1213 19:03:16.694320   20291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54270->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.694338   20291 retry.go:31] will retry after 279.431936ms: ssh: handshake failed: read tcp 192.168.39.1:54270->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.896284   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:03:16.962822   20291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:03:16.967415   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:03:16.993018   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 19:03:17.096990   20291 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:03:17.097020   20291 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:03:17.109501   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:03:17.162145   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:03:17.162176   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:03:17.185774   20291 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:03:17.185802   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:03:17.207040   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:03:17.207061   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:03:17.218756   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:03:17.218772   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:03:17.230826   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:03:17.252101   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:03:17.266091   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:03:17.293625   20291 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:03:17.293647   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:03:17.296471   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:03:17.309401   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:03:17.309432   20291 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:03:17.358425   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:03:17.358453   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:03:17.422885   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:03:17.434841   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:03:17.434875   20291 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:03:17.471377   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:03:17.471400   20291 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:03:17.489584   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:03:17.489607   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:03:17.497937   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:03:17.569936   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:03:17.569969   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:03:17.579596   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:03:17.579862   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:03:17.579879   20291 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:03:17.606224   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:03:17.606250   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:03:17.676448   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:03:17.676474   20291 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:03:17.716191   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:03:17.716213   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:03:17.778820   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:03:17.805163   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:03:17.870233   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:03:17.870265   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:03:17.918471   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:03:17.918499   20291 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:03:18.176904   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:03:18.176936   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:03:18.178095   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.281776665s)
	I1213 19:03:18.178132   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.178143   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.178148   20291 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.215289586s)
	I1213 19:03:18.178430   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.178445   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.178455   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.178459   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.178463   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.179123   20291 node_ready.go:35] waiting up to 6m0s for node "addons-649719" to be "Ready" ...
	I1213 19:03:18.179388   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.179390   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.179407   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.196890   20291 node_ready.go:49] node "addons-649719" has status "Ready":"True"
	I1213 19:03:18.196914   20291 node_ready.go:38] duration metric: took 17.750184ms for node "addons-649719" to be "Ready" ...
	I1213 19:03:18.196926   20291 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:18.210263   20291 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:18.210295   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:03:18.210632   20291 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:18.306648   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.339199811s)
	I1213 19:03:18.306745   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.306760   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.307085   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.307106   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.307115   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.307122   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.307146   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.307356   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.307375   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.307387   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.320357   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.320373   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.320670   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.320689   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.320675   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.366139   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:03:18.366164   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:03:18.519724   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:18.656604   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:03:18.656631   20291 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:03:18.843458   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:03:18.843490   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:03:18.969355   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:03:18.969379   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:03:19.005779   20291 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.012727526s)
	I1213 19:03:19.005815   20291 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 19:03:19.277142   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:03:19.277165   20291 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:03:19.488293   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:03:19.536872   20291 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-649719" context rescaled to 1 replicas
	I1213 19:03:20.281103   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:22.754990   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:23.619081   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:03:23.619129   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:23.622085   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:23.622554   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:23.622585   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:23.622734   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:23.622977   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:23.623151   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:23.623307   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:24.033040   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:03:24.113430   20291 addons.go:234] Setting addon gcp-auth=true in "addons-649719"
	I1213 19:03:24.113488   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:24.113780   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:24.113825   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:24.129361   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I1213 19:03:24.129747   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:24.130240   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:24.130259   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:24.130613   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:24.131167   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:24.131205   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:24.146199   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I1213 19:03:24.147227   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:24.147785   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:24.147808   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:24.148121   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:24.148323   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:24.150000   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:24.150251   20291 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:03:24.150271   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:24.153257   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:24.153741   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:24.153765   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:24.153964   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:24.154133   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:24.154280   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:24.154454   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:24.414218   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.304680254s)
	I1213 19:03:24.414269   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414279   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414341   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.183483555s)
	I1213 19:03:24.414383   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.162259123s)
	I1213 19:03:24.414402   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414383   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414427   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414412   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414475   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.148353579s)
	I1213 19:03:24.414507   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.118019181s)
	I1213 19:03:24.414519   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414526   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414532   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414531   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.414543   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414549   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.414558   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414567   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414631   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.991724351s)
	I1213 19:03:24.414660   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414669   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414733   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.916772943s)
	I1213 19:03:24.414747   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414755   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414833   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.835213799s)
	I1213 19:03:24.414867   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414877   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414971   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.636118841s)
	I1213 19:03:24.414990   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414998   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415070   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.609880098s)
	I1213 19:03:24.415086   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415096   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415220   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.895466762s)
	W1213 19:03:24.415250   20291 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:24.415283   20291 retry.go:31] will retry after 156.830153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:24.415427   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415464   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415472   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415481   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415488   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415537   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415558   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415564   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415571   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415578   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415620   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415639   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415646   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415653   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415659   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415696   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415714   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415721   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415728   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415734   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415769   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415786   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415795   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415802   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415809   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415843   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415860   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415866   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415875   20291 addons.go:475] Verifying addon ingress=true in "addons-649719"
	I1213 19:03:24.416098   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.416131   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416138   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.416145   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.416151   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.416588   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416601   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.416611   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.416619   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.416845   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.416880   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416888   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417017   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417040   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417055   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417063   20291 addons.go:475] Verifying addon metrics-server=true in "addons-649719"
	I1213 19:03:24.417161   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417199   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417207   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417362   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417389   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417396   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417405   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.417413   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.417460   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417478   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417485   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417492   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.417499   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.417940   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417969   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417977   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.418087   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.418108   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.418114   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419107   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419126   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419149   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.419155   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419290   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419323   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.419330   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419349   20291 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649719 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:03:24.419999   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.420008   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.420607   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.420635   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.420642   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.420650   20291 addons.go:475] Verifying addon registry=true in "addons-649719"
	I1213 19:03:24.422211   20291 out.go:177] * Verifying ingress addon...
	I1213 19:03:24.422211   20291 out.go:177] * Verifying registry addon...
	I1213 19:03:24.424760   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:03:24.424826   20291 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:03:24.439180   20291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:24.439202   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.439382   20291 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:03:24.439395   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:24.463333   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.463357   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.463666   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.463685   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.573083   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:24.929709   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.931308   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.231691   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:25.434089   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.434339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.843488   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.355137443s)
	I1213 19:03:25.843541   20291 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.693270538s)
	I1213 19:03:25.843545   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:25.843563   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:25.843824   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:25.843886   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:25.843900   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:25.843915   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:25.843922   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:25.844227   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:25.844245   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:25.844245   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:25.844256   20291 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-649719"
	I1213 19:03:25.845231   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:25.846238   20291 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:03:25.847822   20291 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:03:25.848555   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:03:25.849170   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:03:25.849193   20291 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:03:25.865655   20291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:25.865677   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:25.931958   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.932601   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.946794   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:03:25.946829   20291 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:03:26.075425   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:26.075453   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:03:26.142029   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:26.353328   20291 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:26.353357   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.429245   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.429436   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:26.546253   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.97312431s)
	I1213 19:03:26.546306   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:26.546323   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:26.546600   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:26.546622   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:26.546632   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:26.546639   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:26.548091   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:26.548120   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:26.548133   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:26.853114   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.969034   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.969476   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.195426   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.053356237s)
	I1213 19:03:27.195468   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:27.195478   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:27.195740   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:27.195764   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:27.195810   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:27.195825   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:27.195832   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:27.196047   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:27.196063   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:27.197306   20291 addons.go:475] Verifying addon gcp-auth=true in "addons-649719"
	I1213 19:03:27.198972   20291 out.go:177] * Verifying gcp-auth addon...
	I1213 19:03:27.201201   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:03:27.206570   20291 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:03:27.206587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.244461   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:27.367117   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.436882   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.437100   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.705847   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.853159   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.929613   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.929767   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.204661   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.218200   20291 pod_ready.go:98] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.191 HostIPs:[{IP:192.168.39.191}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-13 19:03:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-13 19:03:21 +0000 UTC,FinishedAt:2024-12-13 19:03:27 +0000 UTC,ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306 Started:0xc001eb6a40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d346a0} {Name:kube-api-access-w69mj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d346b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1213 19:03:28.218231   20291 pod_ready.go:82] duration metric: took 10.007571683s for pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace to be "Ready" ...
	E1213 19:03:28.218242   20291 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.191 HostIPs:[{IP:192.168.39.191}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-13 19:03:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-13 19:03:21 +0000 UTC,FinishedAt:2024-12-13 19:03:27 +0000 UTC,ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306 Started:0xc001eb6a40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d346a0} {Name:kube-api-access-w69mj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d346b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1213 19:03:28.218253   20291 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.226527   20291 pod_ready.go:93] pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.226554   20291 pod_ready.go:82] duration metric: took 8.29183ms for pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.226568   20291 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.239584   20291 pod_ready.go:93] pod "etcd-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.239608   20291 pod_ready.go:82] duration metric: took 13.032083ms for pod "etcd-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.239619   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.246836   20291 pod_ready.go:93] pod "kube-apiserver-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.246873   20291 pod_ready.go:82] duration metric: took 7.245365ms for pod "kube-apiserver-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.246886   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.252300   20291 pod_ready.go:93] pod "kube-controller-manager-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.252327   20291 pod_ready.go:82] duration metric: took 5.433009ms for pod "kube-controller-manager-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.252342   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zhqf7" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.355877   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.429537   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.431706   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.614723   20291 pod_ready.go:93] pod "kube-proxy-zhqf7" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.614745   20291 pod_ready.go:82] duration metric: took 362.396016ms for pod "kube-proxy-zhqf7" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.614753   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.704774   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.852912   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.929233   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.929880   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.014770   20291 pod_ready.go:93] pod "kube-scheduler-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:29.014800   20291 pod_ready.go:82] duration metric: took 400.038737ms for pod "kube-scheduler-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:29.014810   20291 pod_ready.go:39] duration metric: took 10.81787256s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:29.014826   20291 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:03:29.014904   20291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:03:29.060144   20291 api_server.go:72] duration metric: took 12.548000761s to wait for apiserver process to appear ...
	I1213 19:03:29.060171   20291 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:03:29.060195   20291 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I1213 19:03:29.064866   20291 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I1213 19:03:29.065804   20291 api_server.go:141] control plane version: v1.31.2
	I1213 19:03:29.065824   20291 api_server.go:131] duration metric: took 5.64588ms to wait for apiserver health ...
	I1213 19:03:29.065832   20291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:03:29.205325   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.219040   20291 system_pods.go:59] 18 kube-system pods found
	I1213 19:03:29.219073   20291 system_pods.go:61] "amd-gpu-device-plugin-pwrjv" [8cd61049-3892-4422-bb65-27b37c47bafb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 19:03:29.219079   20291 system_pods.go:61] "coredns-7c65d6cfc9-w7p7w" [7ff9e37e-de38-4caa-b342-bd85b02357c1] Running
	I1213 19:03:29.219086   20291 system_pods.go:61] "csi-hostpath-attacher-0" [1fbc15fc-5d42-41f9-8790-47e42f716cc5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 19:03:29.219092   20291 system_pods.go:61] "csi-hostpath-resizer-0" [9331abab-a969-497c-a8ee-a6eb8d49d647] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 19:03:29.219100   20291 system_pods.go:61] "csi-hostpathplugin-zrvnk" [3e44db57-e7a0-4ad7-846c-6f034b87d938] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 19:03:29.219106   20291 system_pods.go:61] "etcd-addons-649719" [c50e5927-a88a-4246-9cac-d92cd80c8dc4] Running
	I1213 19:03:29.219109   20291 system_pods.go:61] "kube-apiserver-addons-649719" [a0d02add-130d-4c4b-9785-d22944023899] Running
	I1213 19:03:29.219113   20291 system_pods.go:61] "kube-controller-manager-addons-649719" [0f06f930-787a-4b89-9d21-62047d0ff6c9] Running
	I1213 19:03:29.219119   20291 system_pods.go:61] "kube-ingress-dns-minikube" [e406783b-1c28-4447-81fd-72cb0ef3b306] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 19:03:29.219123   20291 system_pods.go:61] "kube-proxy-zhqf7" [17cc9d6e-fee4-451f-a0d8-91ebf081f894] Running
	I1213 19:03:29.219127   20291 system_pods.go:61] "kube-scheduler-addons-649719" [d43e44ed-30af-4612-a992-3added273b60] Running
	I1213 19:03:29.219131   20291 system_pods.go:61] "metrics-server-84c5f94fbc-m8bmq" [19020284-7a06-4b3e-af82-964b038c6aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 19:03:29.219138   20291 system_pods.go:61] "nvidia-device-plugin-daemonset-7scc7" [9ac38625-793e-41f6-85f0-ceb6f87c9f02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 19:03:29.219147   20291 system_pods.go:61] "registry-5cc95cd69-pj78t" [ce97be6a-8047-4747-a0f2-aa19bd1ffd4e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 19:03:29.219152   20291 system_pods.go:61] "registry-proxy-q8msp" [831a22d5-3f2d-460b-a739-1e316400aebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 19:03:29.219160   20291 system_pods.go:61] "snapshot-controller-56fcc65765-qddnd" [f8de8150-1a12-4a3a-9e2f-19b427174422] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.219165   20291 system_pods.go:61] "snapshot-controller-56fcc65765-zchf9" [d9385680-6ee6-4cd9-ab58-c0ab8290ac77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.219169   20291 system_pods.go:61] "storage-provisioner" [bfe88593-e74e-4b8a-841d-81f2488dc9b4] Running
	I1213 19:03:29.219175   20291 system_pods.go:74] duration metric: took 153.338369ms to wait for pod list to return data ...
	I1213 19:03:29.219184   20291 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:03:29.352719   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.414680   20291 default_sa.go:45] found service account: "default"
	I1213 19:03:29.414702   20291 default_sa.go:55] duration metric: took 195.512097ms for default service account to be created ...
	I1213 19:03:29.414710   20291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:03:29.431017   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.431610   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.621084   20291 system_pods.go:86] 18 kube-system pods found
	I1213 19:03:29.621117   20291 system_pods.go:89] "amd-gpu-device-plugin-pwrjv" [8cd61049-3892-4422-bb65-27b37c47bafb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 19:03:29.621124   20291 system_pods.go:89] "coredns-7c65d6cfc9-w7p7w" [7ff9e37e-de38-4caa-b342-bd85b02357c1] Running
	I1213 19:03:29.621131   20291 system_pods.go:89] "csi-hostpath-attacher-0" [1fbc15fc-5d42-41f9-8790-47e42f716cc5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 19:03:29.621136   20291 system_pods.go:89] "csi-hostpath-resizer-0" [9331abab-a969-497c-a8ee-a6eb8d49d647] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 19:03:29.621143   20291 system_pods.go:89] "csi-hostpathplugin-zrvnk" [3e44db57-e7a0-4ad7-846c-6f034b87d938] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 19:03:29.621147   20291 system_pods.go:89] "etcd-addons-649719" [c50e5927-a88a-4246-9cac-d92cd80c8dc4] Running
	I1213 19:03:29.621152   20291 system_pods.go:89] "kube-apiserver-addons-649719" [a0d02add-130d-4c4b-9785-d22944023899] Running
	I1213 19:03:29.621156   20291 system_pods.go:89] "kube-controller-manager-addons-649719" [0f06f930-787a-4b89-9d21-62047d0ff6c9] Running
	I1213 19:03:29.621164   20291 system_pods.go:89] "kube-ingress-dns-minikube" [e406783b-1c28-4447-81fd-72cb0ef3b306] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 19:03:29.621168   20291 system_pods.go:89] "kube-proxy-zhqf7" [17cc9d6e-fee4-451f-a0d8-91ebf081f894] Running
	I1213 19:03:29.621175   20291 system_pods.go:89] "kube-scheduler-addons-649719" [d43e44ed-30af-4612-a992-3added273b60] Running
	I1213 19:03:29.621180   20291 system_pods.go:89] "metrics-server-84c5f94fbc-m8bmq" [19020284-7a06-4b3e-af82-964b038c6aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 19:03:29.621186   20291 system_pods.go:89] "nvidia-device-plugin-daemonset-7scc7" [9ac38625-793e-41f6-85f0-ceb6f87c9f02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 19:03:29.621197   20291 system_pods.go:89] "registry-5cc95cd69-pj78t" [ce97be6a-8047-4747-a0f2-aa19bd1ffd4e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 19:03:29.621205   20291 system_pods.go:89] "registry-proxy-q8msp" [831a22d5-3f2d-460b-a739-1e316400aebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 19:03:29.621209   20291 system_pods.go:89] "snapshot-controller-56fcc65765-qddnd" [f8de8150-1a12-4a3a-9e2f-19b427174422] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.621215   20291 system_pods.go:89] "snapshot-controller-56fcc65765-zchf9" [d9385680-6ee6-4cd9-ab58-c0ab8290ac77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.621219   20291 system_pods.go:89] "storage-provisioner" [bfe88593-e74e-4b8a-841d-81f2488dc9b4] Running
	I1213 19:03:29.621228   20291 system_pods.go:126] duration metric: took 206.513579ms to wait for k8s-apps to be running ...
	I1213 19:03:29.621235   20291 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:03:29.621274   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:03:29.638401   20291 system_svc.go:56] duration metric: took 17.154974ms WaitForService to wait for kubelet
	I1213 19:03:29.638429   20291 kubeadm.go:582] duration metric: took 13.126290634s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:03:29.638451   20291 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:03:29.716193   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.815545   20291 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 19:03:29.815574   20291 node_conditions.go:123] node cpu capacity is 2
	I1213 19:03:29.815588   20291 node_conditions.go:105] duration metric: took 177.131475ms to run NodePressure ...
	I1213 19:03:29.815600   20291 start.go:241] waiting for startup goroutines ...
	I1213 19:03:29.853655   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.929025   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.929380   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.204405   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.353515   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.428680   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:30.429085   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.705401   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.854659   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.928907   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.929497   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.204748   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.352740   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.431042   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.431332   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.704901   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.853787   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.929442   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.929569   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.204505   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.353587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.429639   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.429868   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:32.703964   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.853068   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.928701   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.930034   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.205131   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.353750   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.431697   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.432480   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.704358   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.852933   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.928444   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.931163   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.204961   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.352648   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.429576   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.429821   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.703937   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.852800   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.928895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.929605   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.309289   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.460014   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.460236   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.460777   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.705223   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.852601   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.929060   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.929404   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.204563   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.354011   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.429619   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.430429   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.704934   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.852435   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.929689   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.931031   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.204225   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.353838   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.430278   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.430484   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.706395   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.855542   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.929713   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.930012   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.204362   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.352969   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.428670   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.428678   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.703870   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.852624   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.928356   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.929194   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.204915   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.353394   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.428683   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.429344   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.704926   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.852875   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.928254   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.928566   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.205014   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.354299   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.428562   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:40.429077   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.704457   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.853312   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.929719   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.930291   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.204745   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.352651   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.429023   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:41.429449   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.705351   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.853796   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.929452   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.929797   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.204325   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.353008   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.429817   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:42.430569   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.704832   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.853998   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.928799   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:42.930343   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.205953   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.352411   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.429245   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.430574   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:43.704410   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.853482   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.929657   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:43.930104   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.204302   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.353358   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.428706   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:44.430358   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.704632   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.853684   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.929008   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:44.929453   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.204218   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.643538   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.644545   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.649803   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:45.705985   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.853884   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.931126   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:45.931414   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.204818   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.353083   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.430132   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.430198   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:46.705054   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.852775   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.929060   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.929102   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.204940   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.352731   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.428840   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.429316   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:47.704101   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.853260   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.928740   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.929645   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.204757   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.353672   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.428596   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:48.431043   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.704638   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.853663   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.930394   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:48.931103   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.204565   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.353483   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.428578   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:49.430147   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.706075   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.854180   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.929099   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:49.929428   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.205568   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.352905   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.428901   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:50.430052   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.704509   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.853413   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.928768   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:50.928825   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.204561   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.353452   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.429436   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:51.429953   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.704275   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.853490   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.928912   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:51.929823   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.205401   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.353085   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.428695   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:52.429096   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.704274   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.853809   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.929880   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:52.930129   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.204975   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.352717   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.428148   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:53.428375   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.707298   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.856725   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.929339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:53.930593   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.205197   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.353092   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.429660   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.430177   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:54.704800   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.852574   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.928634   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.929087   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.206381   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.354793   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.428418   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.428578   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:55.705130   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.853121   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.928370   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.929162   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.204983   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.353614   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.429359   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:56.429897   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.704149   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.853799   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.929727   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.930487   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.204994   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.353069   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.428942   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:57.429855   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.704972   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.853146   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.953215   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.953505   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.205585   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.353176   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.428624   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:58.428947   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.704796   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.852508   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.928885   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:58.929305   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.205083   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.352902   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.428587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:59.428874   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.703999   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.852457   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.929074   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:59.929798   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.204729   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.353890   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.428990   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:00.429427   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.704677   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.852579   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.928334   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:00.930076   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.204890   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.352754   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.428940   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:01.429118   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.704369   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.854832   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.928884   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:01.929129   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.205123   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.352941   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.430899   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.431373   20291 kapi.go:107] duration metric: took 38.006613146s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 19:04:02.704871   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.852461   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.929473   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.205321   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.353036   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.428662   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.704632   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.853967   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.928997   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.204623   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.356198   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.429884   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.705534   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.853847   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.928999   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.205019   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.781493   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.781576   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.782625   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.853853   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.929468   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.205471   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.355977   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.454052   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.705025   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.853507   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.929078   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.224973   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.352959   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.429579   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.704772   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.854015   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.929641   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.204967   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.353702   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.428592   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.704617   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.853819   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.929383   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.204491   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.353171   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.428670   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.704671   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.854712   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.929110   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.204767   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.355626   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.428705   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.704035   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.852699   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.929001   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.204619   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.354010   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.429061   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.704704   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.853527   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.928548   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.205119   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.352764   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.428501   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.704510   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.853367   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.928995   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.204438   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.354569   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.429031   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.704459   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.853337   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.928475   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.205040   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.352611   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.428946   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.704135   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.853727   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.929076   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.205955   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.822396   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.823208   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.823715   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.855388   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.929585   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.205103   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.354553   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.431323   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.705745   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.854287   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.929088   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.204844   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.354155   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.429122   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.704053   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.852649   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.096819   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.204895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:18.354568   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.455992   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.705302   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:18.853176   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.928955   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.205180   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:19.353256   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.429180   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.704775   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:19.853578   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.930799   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.204460   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:20.353301   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.429206   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.704678   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:20.853470   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.929112   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.204641   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:21.371782   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.433678   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.705318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:21.853119   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.929488   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.205093   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:22.354577   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.429028   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.704318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:22.852824   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.928929   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.205218   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:23.355097   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.566314   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.704927   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:23.852935   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.953057   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:24.205024   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:24.352747   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.430749   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:24.704910   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:24.853368   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.929305   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:25.205125   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:25.353540   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.431622   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:25.705428   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:25.853339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.928840   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:26.204501   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:26.353533   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.432198   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:26.705249   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:26.853655   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.930063   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:27.204572   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:27.353601   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.429676   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:27.704282   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:27.852708   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.930310   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:28.205536   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:28.353754   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.428505   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:28.705659   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:28.857318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.958509   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:29.224252   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:29.353307   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.429186   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:29.704895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:29.852288   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.929233   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:30.205075   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:30.353331   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:30.429323   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.052660   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.053507   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.054326   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.205167   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.352809   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.428504   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.705513   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.853129   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.928814   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:32.205023   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:32.352829   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:32.428668   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:32.704853   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:32.852935   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:32.953564   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:33.205276   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:33.353407   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:33.429280   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:33.848765   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:33.857295   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:33.928908   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:34.204187   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:34.353465   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:34.429186   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:34.704562   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:34.853494   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:34.929653   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:35.207370   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:35.356844   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:35.428879   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:35.711480   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:35.853291   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:35.929071   20291 kapi.go:107] duration metric: took 1m11.504230916s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:04:36.211009   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:36.358778   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:36.704201   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.065031   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:37.260292   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.362693   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:37.705491   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.853639   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:38.205111   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:38.353597   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:38.704528   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:38.862910   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:39.204175   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:39.358321   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:39.704838   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:39.852755   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:40.204654   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:40.353224   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:40.704675   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:40.854916   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:41.204668   20291 kapi.go:107] duration metric: took 1m14.003465191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:04:41.206366   20291 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649719 cluster.
	I1213 19:04:41.207678   20291 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:04:41.208809   20291 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 19:04:41.361895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:41.854462   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:42.352834   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:42.854643   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:43.353840   20291 kapi.go:107] duration metric: took 1m17.505280005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:04:43.355778   20291 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, nvidia-device-plugin, ingress-dns, storage-provisioner, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 19:04:43.357115   20291 addons.go:510] duration metric: took 1m26.844938547s for enable addons: enabled=[cloud-spanner default-storageclass metrics-server inspektor-gadget nvidia-device-plugin ingress-dns storage-provisioner amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1213 19:04:43.357155   20291 start.go:246] waiting for cluster config update ...
	I1213 19:04:43.357172   20291 start.go:255] writing updated cluster config ...
	I1213 19:04:43.357408   20291 ssh_runner.go:195] Run: rm -f paused
	I1213 19:04:43.406100   20291 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:04:43.407774   20291 out.go:177] * Done! kubectl is now configured to use "addons-649719" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.405840324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901405821605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=768d987a-de87-46e6-b84c-30f6ca7a54f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.406663442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d63ab4a9-5eea-4d2b-bc86-b3158718a2b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.406714236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d63ab4a9-5eea-4d2b-bc86-b3158718a2b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.406986447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-bc86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4fa552878e9291a70b4d3a5e46bd6895883e5ca595e57638bfa081fd1f907a,PodSandboxId:adfc92cafbf47b582ac0ce460b58cb4a80813fd8cb206acddddb5d0e094fb946,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116677117253135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mfdlx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 261ee8e9-fc33-483c-b9ba-2c8704733f1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892306795b693da548e0d555b943629238a62639841bd4917e866859d89d9537,PodSandboxId:1d8cb215481ea0c12f288b3a06b1804444ac15beadd5327daeb7aef6e9808390,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734116675118915635,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-k6775,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0731474-a9b0-4d57-966b-3505effb43cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee9
4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:152ee656efd053bd96f592e083494f3bba4b1053b67d06d6c268c7a64488928b,PodSandboxId:b24bcf81ca3419fd9225d28bd76359f87886d4f041206449d71f2972b13a4a80,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116661611896165,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-krv74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4381ac3-d2b0-4eae-83a5-f1678ebd18fb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f250431b2d17d02a949de621eb6e5d388d1e1913078539a9508988bbb558f1,PodSandboxId:ec6204fd2c325844d964485ca897af3d7fcee240fb711e995f4d5e10a6c63e82,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734116617115204415,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e406783b-1c28-4447-81fd-72cb0ef3b306,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1
fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec384
3b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029c
a6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb
79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=d63ab4a9-5eea-4d2b-bc86-b3158718a2b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.446989084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b9b008c-e519-43e5-910d-eea383f5c9ed name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.447057862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b9b008c-e519-43e5-910d-eea383f5c9ed name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.448553897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=297beb54-6d44-4d13-bb32-fc9deffdda39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.449665105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901449643168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=297beb54-6d44-4d13-bb32-fc9deffdda39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.450191379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e40ba34-8a42-4b1c-9f42-526601c37a55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.450246523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e40ba34-8a42-4b1c-9f42-526601c37a55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.450664242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-bc86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4fa552878e9291a70b4d3a5e46bd6895883e5ca595e57638bfa081fd1f907a,PodSandboxId:adfc92cafbf47b582ac0ce460b58cb4a80813fd8cb206acddddb5d0e094fb946,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116677117253135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mfdlx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 261ee8e9-fc33-483c-b9ba-2c8704733f1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892306795b693da548e0d555b943629238a62639841bd4917e866859d89d9537,PodSandboxId:1d8cb215481ea0c12f288b3a06b1804444ac15beadd5327daeb7aef6e9808390,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734116675118915635,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-k6775,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0731474-a9b0-4d57-966b-3505effb43cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee9
4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:152ee656efd053bd96f592e083494f3bba4b1053b67d06d6c268c7a64488928b,PodSandboxId:b24bcf81ca3419fd9225d28bd76359f87886d4f041206449d71f2972b13a4a80,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116661611896165,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-krv74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4381ac3-d2b0-4eae-83a5-f1678ebd18fb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f250431b2d17d02a949de621eb6e5d388d1e1913078539a9508988bbb558f1,PodSandboxId:ec6204fd2c325844d964485ca897af3d7fcee240fb711e995f4d5e10a6c63e82,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734116617115204415,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e406783b-1c28-4447-81fd-72cb0ef3b306,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1
fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec384
3b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029c
a6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb
79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=2e40ba34-8a42-4b1c-9f42-526601c37a55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.484831050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d2dfd5c-c31e-43e5-a8f2-52dba8e666b0 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.484908919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d2dfd5c-c31e-43e5-a8f2-52dba8e666b0 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.485920349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aecb4bba-f809-495d-9af6-df953fac2218 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.487157981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901487132305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aecb4bba-f809-495d-9af6-df953fac2218 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.487691455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0928c05c-3f07-4775-8a66-c4a7e02c6439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.487765024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0928c05c-3f07-4775-8a66-c4a7e02c6439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.488082659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-bc86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4fa552878e9291a70b4d3a5e46bd6895883e5ca595e57638bfa081fd1f907a,PodSandboxId:adfc92cafbf47b582ac0ce460b58cb4a80813fd8cb206acddddb5d0e094fb946,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116677117253135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mfdlx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 261ee8e9-fc33-483c-b9ba-2c8704733f1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892306795b693da548e0d555b943629238a62639841bd4917e866859d89d9537,PodSandboxId:1d8cb215481ea0c12f288b3a06b1804444ac15beadd5327daeb7aef6e9808390,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734116675118915635,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-k6775,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0731474-a9b0-4d57-966b-3505effb43cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee9
4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:152ee656efd053bd96f592e083494f3bba4b1053b67d06d6c268c7a64488928b,PodSandboxId:b24bcf81ca3419fd9225d28bd76359f87886d4f041206449d71f2972b13a4a80,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116661611896165,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-krv74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4381ac3-d2b0-4eae-83a5-f1678ebd18fb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f250431b2d17d02a949de621eb6e5d388d1e1913078539a9508988bbb558f1,PodSandboxId:ec6204fd2c325844d964485ca897af3d7fcee240fb711e995f4d5e10a6c63e82,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734116617115204415,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e406783b-1c28-4447-81fd-72cb0ef3b306,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1
fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec384
3b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029c
a6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb
79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=0928c05c-3f07-4775-8a66-c4a7e02c6439 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.520032052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b3397a5-b5c8-4abb-b4ac-730d2e747546 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.520098478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b3397a5-b5c8-4abb-b4ac-730d2e747546 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.520949958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bee1f404-3419-4db4-805e-8c570729269a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.522274049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901522248268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bee1f404-3419-4db4-805e-8c570729269a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.522816498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54af7221-2e36-44db-bcda-772aa0b4b810 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.522887927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54af7221-2e36-44db-bcda-772aa0b4b810 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:08:21 addons-649719 crio[659]: time="2024-12-13 19:08:21.523172954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-bc86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4fa552878e9291a70b4d3a5e46bd6895883e5ca595e57638bfa081fd1f907a,PodSandboxId:adfc92cafbf47b582ac0ce460b58cb4a80813fd8cb206acddddb5d0e094fb946,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116677117253135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mfdlx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 261ee8e9-fc33-483c-b9ba-2c8704733f1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:892306795b693da548e0d555b943629238a62639841bd4917e866859d89d9537,PodSandboxId:1d8cb215481ea0c12f288b3a06b1804444ac15beadd5327daeb7aef6e9808390,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734116675118915635,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-k6775,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0731474-a9b0-4d57-966b-3505effb43cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee9
4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:152ee656efd053bd96f592e083494f3bba4b1053b67d06d6c268c7a64488928b,PodSandboxId:b24bcf81ca3419fd9225d28bd76359f87886d4f041206449d71f2972b13a4a80,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734116661611896165,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-krv74,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4381ac3-d2b0-4eae-83a5-f1678ebd18fb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f250431b2d17d02a949de621eb6e5d388d1e1913078539a9508988bbb558f1,PodSandboxId:ec6204fd2c325844d964485ca897af3d7fcee240fb711e995f4d5e10a6c63e82,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734116617115204415,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e406783b-1c28-4447-81fd-72cb0ef3b306,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1
fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec384
3b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029c
a6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb
79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=54af7221-2e36-44db-bcda-772aa0b4b810 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7c0ee3dc60bbd       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   e84f15dce45f3       nginx
	56d75527174b7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   912ee752dd607       busybox
	ee4fa552878e9       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     2                   adfc92cafbf47       ingress-nginx-admission-patch-mfdlx
	892306795b693       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   1d8cb215481ea       ingress-nginx-controller-5f85ff4588-k6775
	152ee656efd05       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   b24bcf81ca341       ingress-nginx-admission-create-krv74
	263fd07c67c84       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   6cff6e843cc00       metrics-server-84c5f94fbc-m8bmq
	78979dd62cb5c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f5ce684e2db2b       amd-gpu-device-plugin-pwrjv
	34f250431b2d1       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   ec6204fd2c325       kube-ingress-dns-minikube
	c9bdc3b6f210c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   da97e93fd1bf4       storage-provisioner
	2c0a3ba6ea0fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   b0cacdc5e5b3f       coredns-7c65d6cfc9-w7p7w
	a1fb13faad0ab       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   467fc785ceb6d       kube-proxy-zhqf7
	ce65a54464d90       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   58dd99b1ffe8c       kube-scheduler-addons-649719
	0533f80981943       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   29262b8325138       etcd-addons-649719
	0f0d6029ca634       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   40e4411e40c5a       kube-apiserver-addons-649719
	72d9cac40167e       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   0b1f2f7a4f9e9       kube-controller-manager-addons-649719
	
	
	==> coredns [2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575] <==
	[INFO] 10.244.0.8:55054 - 35025 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000110054s
	[INFO] 10.244.0.8:55054 - 2021 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000137898s
	[INFO] 10.244.0.8:55054 - 16985 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008273s
	[INFO] 10.244.0.8:55054 - 26323 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071953s
	[INFO] 10.244.0.8:55054 - 5430 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000088932s
	[INFO] 10.244.0.8:55054 - 58897 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000146928s
	[INFO] 10.244.0.8:55054 - 41379 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000095457s
	[INFO] 10.244.0.8:44789 - 61248 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008558s
	[INFO] 10.244.0.8:44789 - 60953 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000033002s
	[INFO] 10.244.0.8:48669 - 52983 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048827s
	[INFO] 10.244.0.8:48669 - 52527 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029589s
	[INFO] 10.244.0.8:47413 - 50913 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041579s
	[INFO] 10.244.0.8:47413 - 50462 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027873s
	[INFO] 10.244.0.8:52196 - 50548 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044351s
	[INFO] 10.244.0.8:52196 - 50391 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106896s
	[INFO] 10.244.0.23:34270 - 41862 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001157605s
	[INFO] 10.244.0.23:51684 - 12429 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000211544s
	[INFO] 10.244.0.23:38067 - 16938 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137219s
	[INFO] 10.244.0.23:43723 - 50011 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124496s
	[INFO] 10.244.0.23:51940 - 24827 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069519s
	[INFO] 10.244.0.23:56671 - 22686 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078912s
	[INFO] 10.244.0.23:38559 - 64108 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000670222s
	[INFO] 10.244.0.23:52821 - 22756 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001085153s
	[INFO] 10.244.0.28:51955 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000609063s
	[INFO] 10.244.0.28:56341 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000520634s
	
	
	==> describe nodes <==
	Name:               addons-649719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-649719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-649719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_03_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649719
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:03:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649719
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:08:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:06:15 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:06:15 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:06:15 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:06:15 +0000   Fri, 13 Dec 2024 19:03:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    addons-649719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec861977a6ee432faa82b25b478a8504
	  System UUID:                ec861977-a6ee-432f-aa82-b25b478a8504
	  Boot ID:                    56bfbfda-6224-405b-9d0d-89e8546fb391
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  default                     hello-world-app-55bf9c44b4-j75hj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-k6775    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m57s
	  kube-system                 amd-gpu-device-plugin-pwrjv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 coredns-7c65d6cfc9-w7p7w                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m5s
	  kube-system                 etcd-addons-649719                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m12s
	  kube-system                 kube-apiserver-addons-649719                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-649719        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-zhqf7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-649719                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 metrics-server-84c5f94fbc-m8bmq              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m3s   kube-proxy       
	  Normal  Starting                 5m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m10s  kubelet          Node addons-649719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s  kubelet          Node addons-649719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s  kubelet          Node addons-649719 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m10s  kubelet          Node addons-649719 status is now: NodeReady
	  Normal  RegisteredNode           5m6s   node-controller  Node addons-649719 event: Registered Node addons-649719 in Controller
	  Normal  CIDRAssignmentFailed     5m6s   cidrAllocator    Node addons-649719 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.054952] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.980879] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.083104] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.779535] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.145779] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.097100] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.115071] kauditd_printk_skb: 136 callbacks suppressed
	[ +10.175203] kauditd_printk_skb: 69 callbacks suppressed
	[ +20.695745] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 19:04] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.085162] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.177172] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.635763] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.281857] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.113327] kauditd_printk_skb: 16 callbacks suppressed
	[Dec13 19:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.019709] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.171411] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.451898] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.735737] kauditd_printk_skb: 31 callbacks suppressed
	[ +10.520037] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.563295] kauditd_printk_skb: 7 callbacks suppressed
	[Dec13 19:06] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.664458] kauditd_printk_skb: 9 callbacks suppressed
	[Dec13 19:08] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4] <==
	{"level":"warn","ts":"2024-12-13T19:04:31.039128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:30.693319Z","time spent":"345.802438ms","remote":"127.0.0.1:40898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-13T19:04:31.039253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.151617ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039267Z","caller":"traceutil/trace.go:171","msg":"trace[1284115171] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1069; }","duration":"295.167517ms","start":"2024-12-13T19:04:30.744094Z","end":"2024-12-13T19:04:31.039262Z","steps":["trace[1284115171] 'range keys from in-memory index tree'  (duration: 295.144984ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:31.039332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.499742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039342Z","caller":"traceutil/trace.go:171","msg":"trace[396916360] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"198.51089ms","start":"2024-12-13T19:04:30.840828Z","end":"2024-12-13T19:04:31.039339Z","steps":["trace[396916360] 'range keys from in-memory index tree'  (duration: 198.46325ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:31.039531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.567988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039549Z","caller":"traceutil/trace.go:171","msg":"trace[1110299860] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"122.588336ms","start":"2024-12-13T19:04:30.916956Z","end":"2024-12-13T19:04:31.039544Z","steps":["trace[1110299860] 'range keys from in-memory index tree'  (duration: 122.492188ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:33.832324Z","caller":"traceutil/trace.go:171","msg":"trace[81353000] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"261.83289ms","start":"2024-12-13T19:04:33.570479Z","end":"2024-12-13T19:04:33.832312Z","steps":["trace[81353000] 'process raft request'  (duration: 261.526429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:33.833481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.467685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:33.833528Z","caller":"traceutil/trace.go:171","msg":"trace[1146071498] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1079; }","duration":"152.567682ms","start":"2024-12-13T19:04:33.680951Z","end":"2024-12-13T19:04:33.833519Z","steps":["trace[1146071498] 'agreement among raft nodes before linearized reading'  (duration: 152.443856ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:33.832112Z","caller":"traceutil/trace.go:171","msg":"trace[175880336] linearizableReadLoop","detail":"{readStateIndex:1112; appliedIndex:1111; }","duration":"151.125009ms","start":"2024-12-13T19:04:33.680973Z","end":"2024-12-13T19:04:33.832098Z","steps":["trace[175880336] 'read index received'  (duration: 150.991367ms)","trace[175880336] 'applied index is now lower than readState.Index'  (duration: 133.232µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:33.834708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.057875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:33.834815Z","caller":"traceutil/trace.go:171","msg":"trace[1053988517] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"141.229163ms","start":"2024-12-13T19:04:33.693578Z","end":"2024-12-13T19:04:33.834807Z","steps":["trace[1053988517] 'agreement among raft nodes before linearized reading'  (duration: 141.040802ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:37.041959Z","caller":"traceutil/trace.go:171","msg":"trace[564859694] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"329.347059ms","start":"2024-12-13T19:04:36.712584Z","end":"2024-12-13T19:04:37.041931Z","steps":["trace[564859694] 'process raft request'  (duration: 329.155796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.042388Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:36.712561Z","time spent":"329.4448ms","remote":"127.0.0.1:40992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<>"}
	{"level":"info","ts":"2024-12-13T19:04:37.042853Z","caller":"traceutil/trace.go:171","msg":"trace[169519707] linearizableReadLoop","detail":"{readStateIndex:1130; appliedIndex:1130; }","duration":"298.75709ms","start":"2024-12-13T19:04:36.744086Z","end":"2024-12-13T19:04:37.042843Z","steps":["trace[169519707] 'read index received'  (duration: 298.725627ms)","trace[169519707] 'applied index is now lower than readState.Index'  (duration: 30.668µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:37.042966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.867773ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:37.042985Z","caller":"traceutil/trace.go:171","msg":"trace[1925101956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1096; }","duration":"298.897087ms","start":"2024-12-13T19:04:36.744082Z","end":"2024-12-13T19:04:37.042979Z","steps":["trace[1925101956] 'agreement among raft nodes before linearized reading'  (duration: 298.805831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.044681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.96722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:37.044815Z","caller":"traceutil/trace.go:171","msg":"trace[401929220] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1097; }","duration":"204.151491ms","start":"2024-12-13T19:04:36.840655Z","end":"2024-12-13T19:04:37.044807Z","steps":["trace[401929220] 'agreement among raft nodes before linearized reading'  (duration: 203.873124ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:37.045195Z","caller":"traceutil/trace.go:171","msg":"trace[2006328592] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"300.464134ms","start":"2024-12-13T19:04:36.744676Z","end":"2024-12-13T19:04:37.045140Z","steps":["trace[2006328592] 'process raft request'  (duration: 299.728571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.045326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:36.744660Z","time spent":"300.632769ms","remote":"127.0.0.1:40802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-k6775.1810d1ee041cd097\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-k6775.1810d1ee041cd097\" value_size:675 lease:419560174168363166 >> failure:<>"}
	{"level":"warn","ts":"2024-12-13T19:05:19.787043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.478421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:05:19.787195Z","caller":"traceutil/trace.go:171","msg":"trace[460394353] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1346; }","duration":"105.668748ms","start":"2024-12-13T19:05:19.681497Z","end":"2024-12-13T19:05:19.787166Z","steps":["trace[460394353] 'range keys from in-memory index tree'  (duration: 105.430044ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:05:49.209046Z","caller":"traceutil/trace.go:171","msg":"trace[2078710238] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"102.318653ms","start":"2024-12-13T19:05:49.106707Z","end":"2024-12-13T19:05:49.209026Z","steps":["trace[2078710238] 'process raft request'  (duration: 102.218983ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:08:21 up 5 min,  0 users,  load average: 0.48, 1.12, 0.60
	Linux addons-649719 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5] <==
	 > logger="UnhandledError"
	E1213 19:05:12.354242       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.108.150:443: connect: connection refused" logger="UnhandledError"
	E1213 19:05:12.359210       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.108.150:443: connect: connection refused" logger="UnhandledError"
	I1213 19:05:12.429727       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 19:05:14.047347       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.7.186"}
	I1213 19:05:40.553041       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:05:41.578348       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1213 19:05:42.927361       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 19:05:57.732783       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:05:57.985110       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.171.190"}
	I1213 19:05:58.662150       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 19:06:13.820268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.820462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.852925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.853068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.853935       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.854034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.867062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.870543       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.900819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.900855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:06:14.854622       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:06:14.901242       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1213 19:06:14.992194       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1213 19:08:20.410869       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.96.29"}
	
	
	==> kube-controller-manager [72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395] <==
	E1213 19:06:35.561093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:06:50.026250       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:06:50.026561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:06:53.174196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:06:53.174308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:06:54.708287       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:06:54.708391       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:04.310907       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:04.310969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:24.678180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:24.678241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:27.049135       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:27.049253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:28.179011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:28.179056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:07:56.273941       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:07:56.274259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:02.670074       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:02.670123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:17.363942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:17.364037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:08:20.244520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.175305ms"
	I1213 19:08:20.259711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.145153ms"
	I1213 19:08:20.260192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="27.939µs"
	I1213 19:08:20.264497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="70.141µs"
	
	
	==> kube-proxy [a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:03:18.022649       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:03:18.040969       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E1213 19:03:18.041042       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:03:18.276891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:03:18.276925       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:03:18.276959       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:03:18.281962       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:03:18.282174       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:03:18.282185       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:03:18.290371       1 config.go:199] "Starting service config controller"
	I1213 19:03:18.290393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:03:18.290410       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:03:18.290414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:03:18.290800       1 config.go:328] "Starting node config controller"
	I1213 19:03:18.290829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:03:18.390518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:03:18.390530       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:03:18.390989       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6] <==
	W1213 19:03:08.654691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:03:08.654864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:03:08.656602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:03:08.656722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:03:08.656848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:03:08.657000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.511119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 19:03:09.511223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.516355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:03:09.516588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.630508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:03:09.630562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.712131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:03:09.712329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.724759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:03:09.725634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.766353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 19:03:09.766406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.778405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:03:09.778533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1213 19:03:10.142318       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240175    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9385680-6ee6-4cd9-ab58-c0ab8290ac77" containerName="volume-snapshot-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240229    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="825cc24c-3c7f-41c0-bf31-fc3a40ad0573" containerName="task-pv-container"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240238    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-provisioner"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240245    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-external-health-monitor-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240251    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="hostpath"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240256    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="liveness-probe"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240266    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9331abab-a969-497c-a8ee-a6eb8d49d647" containerName="csi-resizer"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240272    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8de8150-1a12-4a3a-9e2f-19b427174422" containerName="volume-snapshot-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240277    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1fbc15fc-5d42-41f9-8790-47e42f716cc5" containerName="csi-attacher"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240285    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="node-driver-registrar"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: E1213 19:08:20.240292    1219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-snapshotter"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240337    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8de8150-1a12-4a3a-9e2f-19b427174422" containerName="volume-snapshot-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240345    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-snapshotter"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240350    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fbc15fc-5d42-41f9-8790-47e42f716cc5" containerName="csi-attacher"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240355    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-external-health-monitor-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240360    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="node-driver-registrar"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240365    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="hostpath"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240370    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="825cc24c-3c7f-41c0-bf31-fc3a40ad0573" containerName="task-pv-container"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240374    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="liveness-probe"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240379    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e44db57-e7a0-4ad7-846c-6f034b87d938" containerName="csi-provisioner"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240384    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9385680-6ee6-4cd9-ab58-c0ab8290ac77" containerName="volume-snapshot-controller"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.240389    1219 memory_manager.go:354] "RemoveStaleState removing state" podUID="9331abab-a969-497c-a8ee-a6eb8d49d647" containerName="csi-resizer"
	Dec 13 19:08:20 addons-649719 kubelet[1219]: I1213 19:08:20.328944    1219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc97j\" (UniqueName: \"kubernetes.io/projected/136a8667-7817-4513-8b26-b79a9e43f9cc-kube-api-access-pc97j\") pod \"hello-world-app-55bf9c44b4-j75hj\" (UID: \"136a8667-7817-4513-8b26-b79a9e43f9cc\") " pod="default/hello-world-app-55bf9c44b4-j75hj"
	Dec 13 19:08:21 addons-649719 kubelet[1219]: E1213 19:08:21.360804    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901360551332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:08:21 addons-649719 kubelet[1219]: E1213 19:08:21.360827    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116901360551332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d] <==
	I1213 19:03:23.346095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:03:23.362073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:03:23.362141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:03:23.381211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:03:23.381363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649719_ab1d4990-2777-474e-8af6-f35340671464!
	I1213 19:03:23.381405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c96f0d72-a262-42ca-b0ef-d20683a4c492", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649719_ab1d4990-2777-474e-8af6-f35340671464 became leader
	I1213 19:03:23.483543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649719_ab1d4990-2777-474e-8af6-f35340671464!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-649719 -n addons-649719
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-j75hj ingress-nginx-admission-create-krv74 ingress-nginx-admission-patch-mfdlx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-649719 describe pod hello-world-app-55bf9c44b4-j75hj ingress-nginx-admission-create-krv74 ingress-nginx-admission-patch-mfdlx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-649719 describe pod hello-world-app-55bf9c44b4-j75hj ingress-nginx-admission-create-krv74 ingress-nginx-admission-patch-mfdlx: exit status 1 (59.143208ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-j75hj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-649719/192.168.39.191
	Start Time:       Fri, 13 Dec 2024 19:08:20 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pc97j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pc97j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-j75hj to addons-649719
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-krv74" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mfdlx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-649719 describe pod hello-world-app-55bf9c44b4-j75hj ingress-nginx-admission-create-krv74 ingress-nginx-admission-patch-mfdlx: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable ingress-dns --alsologtostderr -v=1: (1.195351745s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable ingress --alsologtostderr -v=1: (7.652530947s)
--- FAIL: TestAddons/parallel/Ingress (153.96s)

                                                
                                    
TestAddons/parallel/MetricsServer (324.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.416989ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-m8bmq" [19020284-7a06-4b3e-af82-964b038c6aea] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004297652s
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (65.630791ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m13.537899168s

                                                
                                                
** /stderr **
I1213 19:05:32.539545   19544 retry.go:31] will retry after 3.149418232s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (60.243563ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m16.748791113s

                                                
                                                
** /stderr **
I1213 19:05:35.750520   19544 retry.go:31] will retry after 2.823824755s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (61.105003ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m19.634790652s

                                                
                                                
** /stderr **
I1213 19:05:38.636454   19544 retry.go:31] will retry after 9.702199659s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (64.202861ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m29.401729549s

                                                
                                                
** /stderr **
I1213 19:05:48.403553   19544 retry.go:31] will retry after 12.855480881s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (131.004665ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m42.388756779s

                                                
                                                
** /stderr **
I1213 19:06:01.390389   19544 retry.go:31] will retry after 11.430406797s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (69.64963ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 2m53.88937723s

                                                
                                                
** /stderr **
I1213 19:06:12.891090   19544 retry.go:31] will retry after 14.115790954s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (55.475239ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 3m8.063501148s

                                                
                                                
** /stderr **
I1213 19:06:27.065393   19544 retry.go:31] will retry after 35.408897992s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (56.248078ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 3m43.530137989s

                                                
                                                
** /stderr **
I1213 19:07:02.531857   19544 retry.go:31] will retry after 31.727691142s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (56.146068ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 4m15.314301395s

                                                
                                                
** /stderr **
I1213 19:07:34.316012   19544 retry.go:31] will retry after 38.707529674s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (54.940464ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 4m54.07706842s

                                                
                                                
** /stderr **
I1213 19:08:13.078717   19544 retry.go:31] will retry after 39.919156199s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (57.06984ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 5m34.053571773s

                                                
                                                
** /stderr **
I1213 19:08:53.055317   19544 retry.go:31] will retry after 33.764637017s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (55.464653ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 6m7.876309847s

                                                
                                                
** /stderr **
I1213 19:09:26.877983   19544 retry.go:31] will retry after 1m21.889485554s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-649719 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-649719 top pods -n kube-system: exit status 1 (58.664458ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pwrjv, age: 7m29.829156117s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
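
The retry.go:31 lines above re-run `kubectl top pods -n kube-system` with a growing delay until metrics appear or the budget runs out. A minimal sketch of that pattern (not minikube's retry package; the doubling backoff, one-minute cap, and 6-minute deadline are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// topPodsWithRetry re-runs `kubectl top pods -n kube-system` with an
// increasing delay until it succeeds or the overall deadline passes.
func topPodsWithRetry(kubecontext string, deadline time.Duration) error {
	delay := 3 * time.Second
	start := time.Now()
	for {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Since(start)+delay > deadline {
			return fmt.Errorf("metrics never became available: %v", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
		if delay > time.Minute {
			delay = time.Minute
		}
	}
}

func main() {
	if err := topPodsWithRetry("addons-649719", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The test itself logs each retry and gives up once the wait budget is spent, which is the "failed checking metric server" line above.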
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-649719 -n addons-649719
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 logs -n 25: (1.122178495s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-541042                                                                     | download-only-541042 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| delete  | -p download-only-202348                                                                     | download-only-202348 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-148435 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-148435                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44529                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-148435                                                                     | binary-mirror-148435 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-649719                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | addons-649719                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-649719 --wait=true                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:04 UTC | 13 Dec 24 19:04 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:04 UTC | 13 Dec 24 19:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | -p addons-649719                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-649719 ssh cat                                                                       | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | /opt/local-path-provisioner/pvc-71c31fc0-8ce0-4c6c-8d89-dc3684024ee5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649719 ip                                                                            | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:05 UTC | 13 Dec 24 19:05 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-649719 ssh curl -s                                                                   | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649719 addons                                                                        | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:06 UTC | 13 Dec 24 19:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649719 ip                                                                            | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649719 addons disable                                                                | addons-649719        | jenkins | v1.34.0 | 13 Dec 24 19:08 UTC | 13 Dec 24 19:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:02:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:02:30.144524   20291 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:02:30.144742   20291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:30.144750   20291 out.go:358] Setting ErrFile to fd 2...
	I1213 19:02:30.144754   20291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:30.144930   20291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:02:30.145500   20291 out.go:352] Setting JSON to false
	I1213 19:02:30.146330   20291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2693,"bootTime":1734113857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:02:30.146387   20291 start.go:139] virtualization: kvm guest
	I1213 19:02:30.148317   20291 out.go:177] * [addons-649719] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:02:30.149556   20291 notify.go:220] Checking for updates...
	I1213 19:02:30.149582   20291 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:02:30.150973   20291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:02:30.152093   20291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:02:30.153259   20291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:30.154324   20291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:02:30.155391   20291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:02:30.156585   20291 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:02:30.186528   20291 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 19:02:30.187565   20291 start.go:297] selected driver: kvm2
	I1213 19:02:30.187588   20291 start.go:901] validating driver "kvm2" against <nil>
	I1213 19:02:30.187600   20291 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:02:30.188253   20291 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:30.188327   20291 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 19:02:30.201803   20291 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 19:02:30.201866   20291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:02:30.202150   20291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:02:30.202194   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:02:30.202261   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:02:30.202271   20291 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 19:02:30.202342   20291 start.go:340] cluster config:
	{Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:30.202471   20291 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:30.203909   20291 out.go:177] * Starting "addons-649719" primary control-plane node in "addons-649719" cluster
	I1213 19:02:30.204945   20291 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:30.204986   20291 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:30.204999   20291 cache.go:56] Caching tarball of preloaded images
	I1213 19:02:30.205084   20291 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 19:02:30.205098   20291 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:02:30.205615   20291 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json ...
	I1213 19:02:30.205653   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json: {Name:mkd6f73573a3e1c86cfde6319719ff7b523c616e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:30.205835   20291 start.go:360] acquireMachinesLock for addons-649719: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 19:02:30.205900   20291 start.go:364] duration metric: took 46.771µs to acquireMachinesLock for "addons-649719"
	I1213 19:02:30.205929   20291 start.go:93] Provisioning new machine with config: &{Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:02:30.205982   20291 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 19:02:30.207434   20291 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1213 19:02:30.207573   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:02:30.207610   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:02:30.220765   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1213 19:02:30.221144   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:02:30.221689   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:02:30.221709   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:02:30.222146   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:02:30.222325   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:30.222469   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:30.222627   20291 start.go:159] libmachine.API.Create for "addons-649719" (driver="kvm2")
	I1213 19:02:30.222655   20291 client.go:168] LocalClient.Create starting
	I1213 19:02:30.222695   20291 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem
	I1213 19:02:30.561087   20291 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem
	I1213 19:02:30.714120   20291 main.go:141] libmachine: Running pre-create checks...
	I1213 19:02:30.714142   20291 main.go:141] libmachine: (addons-649719) Calling .PreCreateCheck
	I1213 19:02:30.714607   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:30.715053   20291 main.go:141] libmachine: Creating machine...
	I1213 19:02:30.715078   20291 main.go:141] libmachine: (addons-649719) Calling .Create
	I1213 19:02:30.715269   20291 main.go:141] libmachine: (addons-649719) creating KVM machine...
	I1213 19:02:30.715287   20291 main.go:141] libmachine: (addons-649719) creating network...
	I1213 19:02:30.716552   20291 main.go:141] libmachine: (addons-649719) DBG | found existing default KVM network
	I1213 19:02:30.717212   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:30.717052   20314 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1213 19:02:30.717239   20291 main.go:141] libmachine: (addons-649719) DBG | created network xml: 
	I1213 19:02:30.717256   20291 main.go:141] libmachine: (addons-649719) DBG | <network>
	I1213 19:02:30.717264   20291 main.go:141] libmachine: (addons-649719) DBG |   <name>mk-addons-649719</name>
	I1213 19:02:30.717272   20291 main.go:141] libmachine: (addons-649719) DBG |   <dns enable='no'/>
	I1213 19:02:30.717278   20291 main.go:141] libmachine: (addons-649719) DBG |   
	I1213 19:02:30.717288   20291 main.go:141] libmachine: (addons-649719) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1213 19:02:30.717296   20291 main.go:141] libmachine: (addons-649719) DBG |     <dhcp>
	I1213 19:02:30.717305   20291 main.go:141] libmachine: (addons-649719) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1213 19:02:30.717311   20291 main.go:141] libmachine: (addons-649719) DBG |     </dhcp>
	I1213 19:02:30.717317   20291 main.go:141] libmachine: (addons-649719) DBG |   </ip>
	I1213 19:02:30.717325   20291 main.go:141] libmachine: (addons-649719) DBG |   
	I1213 19:02:30.717353   20291 main.go:141] libmachine: (addons-649719) DBG | </network>
	I1213 19:02:30.717373   20291 main.go:141] libmachine: (addons-649719) DBG | 
	I1213 19:02:30.722555   20291 main.go:141] libmachine: (addons-649719) DBG | trying to create private KVM network mk-addons-649719 192.168.39.0/24...
	I1213 19:02:30.787750   20291 main.go:141] libmachine: (addons-649719) DBG | private KVM network mk-addons-649719 192.168.39.0/24 created
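
The network XML printed above is what the kvm2 driver hands to libvirt. A rough equivalent using the libvirt Go bindings, shown only to illustrate the define-then-start sequence (libvirt.org/go/libvirt and this exact call order are assumptions; the driver reaches libvirt through its own plumbing):

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// networkXML mirrors the <network> document created in the log above.
const networkXML = `<network>
  <name>mk-addons-649719</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from the XML, then start it.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	if err := network.Create(); err != nil {
		log.Fatal(err)
	}
}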
	I1213 19:02:30.787786   20291 main.go:141] libmachine: (addons-649719) setting up store path in /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 ...
	I1213 19:02:30.787804   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:30.787711   20314 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:30.787889   20291 main.go:141] libmachine: (addons-649719) building disk image from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 19:02:30.787984   20291 main.go:141] libmachine: (addons-649719) Downloading /home/jenkins/minikube-integration/20090-12353/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 19:02:31.060741   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.060641   20314 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa...
	I1213 19:02:31.322326   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.322172   20314 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/addons-649719.rawdisk...
	I1213 19:02:31.322365   20291 main.go:141] libmachine: (addons-649719) DBG | Writing magic tar header
	I1213 19:02:31.322405   20291 main.go:141] libmachine: (addons-649719) DBG | Writing SSH key tar header
	I1213 19:02:31.322443   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:31.322314   20314 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 ...
	I1213 19:02:31.322475   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719 (perms=drwx------)
	I1213 19:02:31.322497   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719
	I1213 19:02:31.322508   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines (perms=drwxr-xr-x)
	I1213 19:02:31.322519   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube (perms=drwxr-xr-x)
	I1213 19:02:31.322525   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines
	I1213 19:02:31.322531   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration/20090-12353 (perms=drwxrwxr-x)
	I1213 19:02:31.322541   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 19:02:31.322554   20291 main.go:141] libmachine: (addons-649719) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 19:02:31.322566   20291 main.go:141] libmachine: (addons-649719) creating domain...
	I1213 19:02:31.322579   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:31.322592   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353
	I1213 19:02:31.322605   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1213 19:02:31.322625   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home/jenkins
	I1213 19:02:31.322644   20291 main.go:141] libmachine: (addons-649719) DBG | checking permissions on dir: /home
	I1213 19:02:31.322656   20291 main.go:141] libmachine: (addons-649719) DBG | skipping /home - not owner
	I1213 19:02:31.323542   20291 main.go:141] libmachine: (addons-649719) define libvirt domain using xml: 
	I1213 19:02:31.323557   20291 main.go:141] libmachine: (addons-649719) <domain type='kvm'>
	I1213 19:02:31.323567   20291 main.go:141] libmachine: (addons-649719)   <name>addons-649719</name>
	I1213 19:02:31.323575   20291 main.go:141] libmachine: (addons-649719)   <memory unit='MiB'>4000</memory>
	I1213 19:02:31.323588   20291 main.go:141] libmachine: (addons-649719)   <vcpu>2</vcpu>
	I1213 19:02:31.323596   20291 main.go:141] libmachine: (addons-649719)   <features>
	I1213 19:02:31.323609   20291 main.go:141] libmachine: (addons-649719)     <acpi/>
	I1213 19:02:31.323619   20291 main.go:141] libmachine: (addons-649719)     <apic/>
	I1213 19:02:31.323629   20291 main.go:141] libmachine: (addons-649719)     <pae/>
	I1213 19:02:31.323645   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.323656   20291 main.go:141] libmachine: (addons-649719)   </features>
	I1213 19:02:31.323664   20291 main.go:141] libmachine: (addons-649719)   <cpu mode='host-passthrough'>
	I1213 19:02:31.323678   20291 main.go:141] libmachine: (addons-649719)   
	I1213 19:02:31.323691   20291 main.go:141] libmachine: (addons-649719)   </cpu>
	I1213 19:02:31.323721   20291 main.go:141] libmachine: (addons-649719)   <os>
	I1213 19:02:31.323741   20291 main.go:141] libmachine: (addons-649719)     <type>hvm</type>
	I1213 19:02:31.323748   20291 main.go:141] libmachine: (addons-649719)     <boot dev='cdrom'/>
	I1213 19:02:31.323758   20291 main.go:141] libmachine: (addons-649719)     <boot dev='hd'/>
	I1213 19:02:31.323783   20291 main.go:141] libmachine: (addons-649719)     <bootmenu enable='no'/>
	I1213 19:02:31.323802   20291 main.go:141] libmachine: (addons-649719)   </os>
	I1213 19:02:31.323827   20291 main.go:141] libmachine: (addons-649719)   <devices>
	I1213 19:02:31.323845   20291 main.go:141] libmachine: (addons-649719)     <disk type='file' device='cdrom'>
	I1213 19:02:31.323862   20291 main.go:141] libmachine: (addons-649719)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/boot2docker.iso'/>
	I1213 19:02:31.323874   20291 main.go:141] libmachine: (addons-649719)       <target dev='hdc' bus='scsi'/>
	I1213 19:02:31.323883   20291 main.go:141] libmachine: (addons-649719)       <readonly/>
	I1213 19:02:31.323893   20291 main.go:141] libmachine: (addons-649719)     </disk>
	I1213 19:02:31.323904   20291 main.go:141] libmachine: (addons-649719)     <disk type='file' device='disk'>
	I1213 19:02:31.323916   20291 main.go:141] libmachine: (addons-649719)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1213 19:02:31.323936   20291 main.go:141] libmachine: (addons-649719)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/addons-649719.rawdisk'/>
	I1213 19:02:31.323948   20291 main.go:141] libmachine: (addons-649719)       <target dev='hda' bus='virtio'/>
	I1213 19:02:31.323963   20291 main.go:141] libmachine: (addons-649719)     </disk>
	I1213 19:02:31.323975   20291 main.go:141] libmachine: (addons-649719)     <interface type='network'>
	I1213 19:02:31.323985   20291 main.go:141] libmachine: (addons-649719)       <source network='mk-addons-649719'/>
	I1213 19:02:31.323995   20291 main.go:141] libmachine: (addons-649719)       <model type='virtio'/>
	I1213 19:02:31.324002   20291 main.go:141] libmachine: (addons-649719)     </interface>
	I1213 19:02:31.324009   20291 main.go:141] libmachine: (addons-649719)     <interface type='network'>
	I1213 19:02:31.324018   20291 main.go:141] libmachine: (addons-649719)       <source network='default'/>
	I1213 19:02:31.324029   20291 main.go:141] libmachine: (addons-649719)       <model type='virtio'/>
	I1213 19:02:31.324037   20291 main.go:141] libmachine: (addons-649719)     </interface>
	I1213 19:02:31.324049   20291 main.go:141] libmachine: (addons-649719)     <serial type='pty'>
	I1213 19:02:31.324059   20291 main.go:141] libmachine: (addons-649719)       <target port='0'/>
	I1213 19:02:31.324069   20291 main.go:141] libmachine: (addons-649719)     </serial>
	I1213 19:02:31.324077   20291 main.go:141] libmachine: (addons-649719)     <console type='pty'>
	I1213 19:02:31.324088   20291 main.go:141] libmachine: (addons-649719)       <target type='serial' port='0'/>
	I1213 19:02:31.324100   20291 main.go:141] libmachine: (addons-649719)     </console>
	I1213 19:02:31.324108   20291 main.go:141] libmachine: (addons-649719)     <rng model='virtio'>
	I1213 19:02:31.324116   20291 main.go:141] libmachine: (addons-649719)       <backend model='random'>/dev/random</backend>
	I1213 19:02:31.324126   20291 main.go:141] libmachine: (addons-649719)     </rng>
	I1213 19:02:31.324137   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.324153   20291 main.go:141] libmachine: (addons-649719)     
	I1213 19:02:31.324165   20291 main.go:141] libmachine: (addons-649719)   </devices>
	I1213 19:02:31.324175   20291 main.go:141] libmachine: (addons-649719) </domain>
	I1213 19:02:31.324188   20291 main.go:141] libmachine: (addons-649719) 
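
Booting the guest follows the same pattern: define the persistent domain from the XML printed above, then start it. A sketch with the same bindings (reading the XML from a local domain.xml file is purely illustrative):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain type='kvm'> document shown above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition
	if err != nil {
		log.Fatal(err)
	}
	if err := dom.Create(); err != nil { // boots the defined domain
		log.Fatal(err)
	}
}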
	I1213 19:02:31.329771   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:d4:1e:3f in network default
	I1213 19:02:31.330300   20291 main.go:141] libmachine: (addons-649719) starting domain...
	I1213 19:02:31.330320   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:31.330329   20291 main.go:141] libmachine: (addons-649719) ensuring networks are active...
	I1213 19:02:31.330831   20291 main.go:141] libmachine: (addons-649719) Ensuring network default is active
	I1213 19:02:31.331169   20291 main.go:141] libmachine: (addons-649719) Ensuring network mk-addons-649719 is active
	I1213 19:02:31.331588   20291 main.go:141] libmachine: (addons-649719) getting domain XML...
	I1213 19:02:31.332204   20291 main.go:141] libmachine: (addons-649719) creating domain...
	I1213 19:02:32.698282   20291 main.go:141] libmachine: (addons-649719) waiting for IP...
	I1213 19:02:32.699058   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:32.699430   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:32.699458   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:32.699412   20314 retry.go:31] will retry after 308.894471ms: waiting for domain to come up
	I1213 19:02:33.010171   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.010580   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.010615   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.010562   20314 retry.go:31] will retry after 284.369707ms: waiting for domain to come up
	I1213 19:02:33.297096   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.297510   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.297537   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.297488   20314 retry.go:31] will retry after 455.385881ms: waiting for domain to come up
	I1213 19:02:33.754166   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:33.754611   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:33.754637   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:33.754589   20314 retry.go:31] will retry after 439.340553ms: waiting for domain to come up
	I1213 19:02:34.195082   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:34.195554   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:34.195582   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:34.195529   20314 retry.go:31] will retry after 703.177309ms: waiting for domain to come up
	I1213 19:02:34.900606   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:34.901022   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:34.901071   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:34.901020   20314 retry.go:31] will retry after 639.233467ms: waiting for domain to come up
	I1213 19:02:35.541503   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:35.541933   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:35.541975   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:35.541926   20314 retry.go:31] will retry after 782.355402ms: waiting for domain to come up
	I1213 19:02:36.325584   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:36.325967   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:36.325984   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:36.325950   20314 retry.go:31] will retry after 1.329458891s: waiting for domain to come up
	I1213 19:02:37.657408   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:37.657773   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:37.657803   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:37.657767   20314 retry.go:31] will retry after 1.321375468s: waiting for domain to come up
	I1213 19:02:38.981391   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:38.981764   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:38.981781   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:38.981746   20314 retry.go:31] will retry after 1.935955387s: waiting for domain to come up
	I1213 19:02:40.919661   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:40.920103   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:40.920161   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:40.920098   20314 retry.go:31] will retry after 2.67995961s: waiting for domain to come up
	I1213 19:02:43.601128   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:43.601583   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:43.601609   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:43.601554   20314 retry.go:31] will retry after 3.028482314s: waiting for domain to come up
	I1213 19:02:46.631981   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:46.632417   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:46.632441   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:46.632396   20314 retry.go:31] will retry after 3.308087766s: waiting for domain to come up
	I1213 19:02:49.943819   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:49.944141   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find current IP address of domain addons-649719 in network mk-addons-649719
	I1213 19:02:49.944158   20291 main.go:141] libmachine: (addons-649719) DBG | I1213 19:02:49.944119   20314 retry.go:31] will retry after 4.38190267s: waiting for domain to come up
	I1213 19:02:54.331030   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.331457   20291 main.go:141] libmachine: (addons-649719) found domain IP: 192.168.39.191
	I1213 19:02:54.331488   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has current primary IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.331496   20291 main.go:141] libmachine: (addons-649719) reserving static IP address...
	I1213 19:02:54.331789   20291 main.go:141] libmachine: (addons-649719) DBG | unable to find host DHCP lease matching {name: "addons-649719", mac: "52:54:00:9c:6b:aa", ip: "192.168.39.191"} in network mk-addons-649719
	I1213 19:02:54.398337   20291 main.go:141] libmachine: (addons-649719) reserved static IP address 192.168.39.191 for domain addons-649719
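
The IP discovery and lease matching above amount to scanning the network's DHCP leases for the domain's MAC address. A small sketch of that lookup with the same bindings (the helper name and single-shot lookup are illustrative; the driver polls with backoff, as the retries above show):

package main

import (
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

// leaseFor returns the IP of the DHCP lease matching the given MAC, if any.
func leaseFor(conn *libvirt.Connect, networkName, mac string) (string, error) {
	net, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	leases, err := net.GetDHCPLeases()
	if err != nil {
		return "", err
	}
	for _, l := range leases {
		if l.Mac == mac {
			return l.IPaddr, nil
		}
	}
	return "", fmt.Errorf("no lease for %s yet", mac)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ip, err := leaseFor(conn, "mk-addons-649719", "52:54:00:9c:6b:aa")
	fmt.Println(ip, err)
}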
	I1213 19:02:54.398363   20291 main.go:141] libmachine: (addons-649719) waiting for SSH...
	I1213 19:02:54.398380   20291 main.go:141] libmachine: (addons-649719) DBG | Getting to WaitForSSH function...
	I1213 19:02:54.400646   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.400943   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.400968   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.401130   20291 main.go:141] libmachine: (addons-649719) DBG | Using SSH client type: external
	I1213 19:02:54.401158   20291 main.go:141] libmachine: (addons-649719) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa (-rw-------)
	I1213 19:02:54.401195   20291 main.go:141] libmachine: (addons-649719) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 19:02:54.401215   20291 main.go:141] libmachine: (addons-649719) DBG | About to run SSH command:
	I1213 19:02:54.401234   20291 main.go:141] libmachine: (addons-649719) DBG | exit 0
	I1213 19:02:54.530527   20291 main.go:141] libmachine: (addons-649719) DBG | SSH cmd err, output: <nil>: 
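WaitForSSH shells out to the system ssh client and simply runs "exit 0" against the new address until the command succeeds. A rough sketch of that probe, reusing the address from the log and a placeholder key path, with only a subset of the options shown above:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns true once a trivial command can be run over SSH.
    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", keyPath,
            "docker@"+addr,
            "exit 0") // same trivial command the log uses to prove SSH is up
        return cmd.Run() == nil
    }

    func main() {
        for !sshReady("192.168.39.191", "/path/to/id_rsa") {
            fmt.Println("Getting to WaitForSSH function...")
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
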
	I1213 19:02:54.530796   20291 main.go:141] libmachine: (addons-649719) KVM machine creation complete
	I1213 19:02:54.531070   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:54.531608   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:54.531778   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:54.531900   20291 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1213 19:02:54.531915   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:02:54.533197   20291 main.go:141] libmachine: Detecting operating system of created instance...
	I1213 19:02:54.533211   20291 main.go:141] libmachine: Waiting for SSH to be available...
	I1213 19:02:54.533217   20291 main.go:141] libmachine: Getting to WaitForSSH function...
	I1213 19:02:54.533222   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.535534   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.535859   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.535884   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.536029   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.536211   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.536379   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.536506   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.536655   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.536836   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.536848   20291 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1213 19:02:54.637685   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:02:54.637716   20291 main.go:141] libmachine: Detecting the provisioner...
	I1213 19:02:54.637727   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.640358   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.640683   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.640711   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.640859   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.641027   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.641173   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.641309   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.641484   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.641632   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.641642   20291 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1213 19:02:54.747123   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1213 19:02:54.747202   20291 main.go:141] libmachine: found compatible host: buildroot
	I1213 19:02:54.747217   20291 main.go:141] libmachine: Provisioning with buildroot...
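Provisioner detection comes down to reading /etc/os-release on the guest and matching its fields; here the ID is buildroot. A small sketch of that parse (quote handling simplified):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        vals := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                vals[k] = strings.Trim(v, `"`)
            }
        }
        fmt.Printf("found compatible host: %s (version %s)\n", vals["ID"], vals["VERSION_ID"])
    }
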
	I1213 19:02:54.747227   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.747451   20291 buildroot.go:166] provisioning hostname "addons-649719"
	I1213 19:02:54.747478   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.747675   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.750114   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.750509   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.750536   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.750715   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.750891   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.751032   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.751183   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.751327   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.751472   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.751482   20291 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649719 && echo "addons-649719" | sudo tee /etc/hostname
	I1213 19:02:54.867376   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649719
	
	I1213 19:02:54.867401   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.869855   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.870130   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.870155   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.870343   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:54.870506   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.870660   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:54.870805   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:54.870978   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:54.871184   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:54.871203   20291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649719/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:02:54.983792   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:02:54.983817   20291 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 19:02:54.983844   20291 buildroot.go:174] setting up certificates
	I1213 19:02:54.983854   20291 provision.go:84] configureAuth start
	I1213 19:02:54.983862   20291 main.go:141] libmachine: (addons-649719) Calling .GetMachineName
	I1213 19:02:54.984127   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:54.986611   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.986907   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.986933   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.987046   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:54.989004   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.989331   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:54.989360   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:54.989428   20291 provision.go:143] copyHostCerts
	I1213 19:02:54.989533   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 19:02:54.989650   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 19:02:54.989706   20291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 19:02:54.989752   20291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.addons-649719 san=[127.0.0.1 192.168.39.191 addons-649719 localhost minikube]
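The server certificate generated above carries the SAN list from the log (127.0.0.1, 192.168.39.191, addons-649719, localhost, minikube). A compact crypto/x509 sketch of minting such a certificate; for brevity it is self-signed here, whereas the real flow signs it with the ca.pem/ca-key.pem pair:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-649719"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-649719", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.191")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
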
	I1213 19:02:55.052653   20291 provision.go:177] copyRemoteCerts
	I1213 19:02:55.052703   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:02:55.052724   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.054920   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.055200   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.055224   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.055421   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.055582   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.055708   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.055803   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.136334   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:02:55.158027   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:02:55.178943   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:02:55.199896   20291 provision.go:87] duration metric: took 216.031324ms to configureAuth
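Each "scp ... --> /etc/docker/..." line above pushes a small file over the existing SSH connection rather than invoking a separate scp binary. A hedged sketch of that kind of copy using golang.org/x/crypto/ssh (the key path is a placeholder, and minikube's ssh_runner may be implemented differently):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyFile streams a local file into `sudo tee` on the remote host.
    func copyFile(client *ssh.Client, local, remote string) error {
        data, err := os.ReadFile(local)
        if err != nil {
            return err
        }
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remote))
    }

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.191:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        if err := copyFile(client, "ca.pem", "/etc/docker/ca.pem"); err != nil {
            panic(err)
        }
    }
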
	I1213 19:02:55.199945   20291 buildroot.go:189] setting minikube options for container-runtime
	I1213 19:02:55.200111   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:02:55.200185   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.202574   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.202875   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.202902   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.203054   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.203221   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.203365   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.203526   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.203666   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:55.203802   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:55.203815   20291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:02:55.411750   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:02:55.411782   20291 main.go:141] libmachine: Checking connection to Docker...
	I1213 19:02:55.411791   20291 main.go:141] libmachine: (addons-649719) Calling .GetURL
	I1213 19:02:55.413098   20291 main.go:141] libmachine: (addons-649719) DBG | using libvirt version 6000000
	I1213 19:02:55.415418   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.415759   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.415787   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.415957   20291 main.go:141] libmachine: Docker is up and running!
	I1213 19:02:55.415967   20291 main.go:141] libmachine: Reticulating splines...
	I1213 19:02:55.415973   20291 client.go:171] duration metric: took 25.193307341s to LocalClient.Create
	I1213 19:02:55.415994   20291 start.go:167] duration metric: took 25.193367401s to libmachine.API.Create "addons-649719"
	I1213 19:02:55.416007   20291 start.go:293] postStartSetup for "addons-649719" (driver="kvm2")
	I1213 19:02:55.416020   20291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:02:55.416038   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.416259   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:02:55.416284   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.418028   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.418282   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.418306   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.418416   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.418593   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.418735   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.418859   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.500168   20291 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:02:55.503708   20291 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 19:02:55.503726   20291 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 19:02:55.503781   20291 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 19:02:55.503803   20291 start.go:296] duration metric: took 87.790722ms for postStartSetup
	I1213 19:02:55.503831   20291 main.go:141] libmachine: (addons-649719) Calling .GetConfigRaw
	I1213 19:02:55.504336   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:55.506607   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.506971   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.507005   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.507242   20291 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/config.json ...
	I1213 19:02:55.507443   20291 start.go:128] duration metric: took 25.301449948s to createHost
	I1213 19:02:55.507465   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.509676   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.509992   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.510016   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.510148   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.510300   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.510459   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.510598   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.510718   20291 main.go:141] libmachine: Using SSH client type: native
	I1213 19:02:55.510900   20291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1213 19:02:55.510912   20291 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 19:02:55.614815   20291 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734116575.592233991
	
	I1213 19:02:55.614840   20291 fix.go:216] guest clock: 1734116575.592233991
	I1213 19:02:55.614900   20291 fix.go:229] Guest: 2024-12-13 19:02:55.592233991 +0000 UTC Remote: 2024-12-13 19:02:55.507455192 +0000 UTC m=+25.397340381 (delta=84.778799ms)
	I1213 19:02:55.614935   20291 fix.go:200] guest clock delta is within tolerance: 84.778799ms
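The fix.go lines above parse the guest's `date +%s.%N` output and accept the ~85ms difference from the host clock. A sketch of that comparison; the 2s tolerance is an assumed value for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        const tolerance = 2 * time.Second
        guestOut := "1734116575.592233991" // value the VM returned in the log
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock drifted by %v, would adjust it\n", delta)
        }
    }
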
	I1213 19:02:55.614940   20291 start.go:83] releasing machines lock for "addons-649719", held for 25.40902749s
	I1213 19:02:55.614965   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.615218   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:55.617685   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.618009   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.618030   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.618152   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618616   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618763   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:02:55.618838   20291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:02:55.618894   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.619009   20291 ssh_runner.go:195] Run: cat /version.json
	I1213 19:02:55.619034   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:02:55.621572   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.621774   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.621991   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.622012   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.622123   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.622246   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:55.622266   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:55.622285   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.622491   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.622507   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:02:55.622638   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.622649   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:02:55.622896   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:02:55.623019   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:02:55.730214   20291 ssh_runner.go:195] Run: systemctl --version
	I1213 19:02:55.735908   20291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:02:55.887134   20291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:02:55.893048   20291 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:02:55.893106   20291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:02:55.907341   20291 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 19:02:55.907365   20291 start.go:495] detecting cgroup driver to use...
	I1213 19:02:55.907432   20291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:02:55.921781   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:02:55.934253   20291 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:02:55.934301   20291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:02:55.946609   20291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:02:55.959054   20291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:02:56.075739   20291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:02:56.211389   20291 docker.go:233] disabling docker service ...
	I1213 19:02:56.211463   20291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:02:56.224909   20291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:02:56.236733   20291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:02:56.368552   20291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:02:56.500533   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:02:56.513226   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:02:56.529786   20291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:02:56.529851   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.539308   20291 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:02:56.539364   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.548827   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.558540   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.567956   20291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:02:56.577771   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.587149   20291 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.602221   20291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:02:56.611777   20291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:02:56.621794   20291 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 19:02:56.621835   20291 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 19:02:56.635123   20291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
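When `sysctl net.bridge.bridge-nf-call-iptables` fails, as it does above because br_netfilter is not yet loaded, the module is loaded and IPv4 forwarding is enabled. A minimal sketch of that sequence (it must run as root on the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
            }
        }
        // mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println("enabling ip_forward failed (needs root):", err)
        }
    }
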
	I1213 19:02:56.645184   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:02:56.782500   20291 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:02:56.871537   20291 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:02:56.871624   20291 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:02:56.875799   20291 start.go:563] Will wait 60s for crictl version
	I1213 19:02:56.875859   20291 ssh_runner.go:195] Run: which crictl
	I1213 19:02:56.879225   20291 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:02:56.916160   20291 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 19:02:56.916274   20291 ssh_runner.go:195] Run: crio --version
	I1213 19:02:56.941598   20291 ssh_runner.go:195] Run: crio --version
	I1213 19:02:56.969503   20291 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1213 19:02:56.970660   20291 main.go:141] libmachine: (addons-649719) Calling .GetIP
	I1213 19:02:56.973112   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:56.973407   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:02:56.973431   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:02:56.973610   20291 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 19:02:56.977269   20291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:02:56.988821   20291 kubeadm.go:883] updating cluster {Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:02:56.988912   20291 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:56.988952   20291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:02:57.017866   20291 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1213 19:02:57.017924   20291 ssh_runner.go:195] Run: which lz4
	I1213 19:02:57.021534   20291 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 19:02:57.025150   20291 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 19:02:57.025176   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1213 19:02:58.107948   20291 crio.go:462] duration metric: took 1.086435606s to copy over tarball
	I1213 19:02:58.108016   20291 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 19:03:00.167046   20291 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059000994s)
	I1213 19:03:00.167075   20291 crio.go:469] duration metric: took 2.059102811s to extract the tarball
	I1213 19:03:00.167084   20291 ssh_runner.go:146] rm: /preloaded.tar.lz4
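The preload tarball is copied over, unpacked into /var with `tar -I lz4`, and then removed, with each phase timed. A sketch of the extract-and-clean-up step using the same command line as the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        fmt.Printf("took %s to extract the tarball\n", time.Since(start))
        _ = os.Remove("/preloaded.tar.lz4") // rm: /preloaded.tar.lz4
    }
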
	I1213 19:03:00.214289   20291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:03:00.252257   20291 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:03:00.252279   20291 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:03:00.252286   20291 kubeadm.go:934] updating node { 192.168.39.191 8443 v1.31.2 crio true true} ...
	I1213 19:03:00.252380   20291 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-649719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:03:00.252457   20291 ssh_runner.go:195] Run: crio config
	I1213 19:03:00.295464   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:03:00.295490   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:03:00.295509   20291 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:03:00.295534   20291 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649719 NodeName:addons-649719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:03:00.295683   20291 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-649719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.191"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:03:00.295757   20291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:03:00.305198   20291 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:03:00.305252   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:03:00.313821   20291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 19:03:00.328761   20291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:03:00.343238   20291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
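The kubeadm.yaml shown earlier is rendered in memory and then copied to the VM as kubeadm.yaml.new (2293 bytes). A hypothetical text/template sketch of that render step over a small fragment of the config; the template text is illustrative, not minikube's actual template:

    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    // A tiny fragment of an InitConfiguration, parameterised the way the
    // rendered config above is.
    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.NodeIP}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: unix://{{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    type params struct {
        NodeIP        string
        APIServerPort int
        CRISocket     string
        NodeName      string
    }

    func main() {
        var buf bytes.Buffer
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(&buf, params{
            NodeIP:        "192.168.39.191",
            APIServerPort: 8443,
            CRISocket:     "/var/run/crio/crio.sock",
            NodeName:      "addons-649719",
        }); err != nil {
            panic(err)
        }
        fmt.Printf("rendered %d bytes:\n%s", buf.Len(), buf.String())
    }
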
	I1213 19:03:00.357697   20291 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I1213 19:03:00.361041   20291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:03:00.371780   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:03:00.487741   20291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:03:00.504412   20291 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719 for IP: 192.168.39.191
	I1213 19:03:00.504442   20291 certs.go:194] generating shared ca certs ...
	I1213 19:03:00.504463   20291 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.504626   20291 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 19:03:00.607732   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt ...
	I1213 19:03:00.607758   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt: {Name:mkbfe6eb30bb8ad75f44083b09196d4656fd8b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.608382   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key ...
	I1213 19:03:00.608398   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key: {Name:mk423e5e304b1945183e810a237f3c28213efcd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.608499   20291 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 19:03:00.724203   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt ...
	I1213 19:03:00.724228   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt: {Name:mk643f1f713df237848413aeec087dacce1c8826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.724384   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key ...
	I1213 19:03:00.724400   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key: {Name:mkb212e911818f44d31f4f50e68bf9bf8949fc38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.724487   20291 certs.go:256] generating profile certs ...
	I1213 19:03:00.724551   20291 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key
	I1213 19:03:00.724566   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt with IP's: []
	I1213 19:03:00.901588   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt ...
	I1213 19:03:00.901615   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: {Name:mkcdd50e72c448911a91bb57ba2b3c72dc3c1456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.901784   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key ...
	I1213 19:03:00.901813   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.key: {Name:mk4b17521c748ccfd051d1fa287b436fe3eaa077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.901903   20291 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503
	I1213 19:03:00.901927   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.191]
	I1213 19:03:00.959618   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 ...
	I1213 19:03:00.959640   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503: {Name:mk10c115767c744fbf65f9973a5d604f0d575ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.959798   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503 ...
	I1213 19:03:00.959814   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503: {Name:mk639b04307cad2c5f86a67ddc271fae9f7f0db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:00.959899   20291 certs.go:381] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt.f75a0503 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt
	I1213 19:03:00.959989   20291 certs.go:385] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key.f75a0503 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key
	I1213 19:03:00.960061   20291 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key
	I1213 19:03:00.960091   20291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt with IP's: []
	I1213 19:03:01.047370   20291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt ...
	I1213 19:03:01.047394   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt: {Name:mk92e6f8bbccbfa9955ed41e3b9a578eead1de7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:01.047554   20291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key ...
	I1213 19:03:01.047570   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key: {Name:mkbfc849d9075d20f333d3bfa98996df9a8ea9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:01.047767   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 19:03:01.047809   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:03:01.047870   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:03:01.047912   20291 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 19:03:01.048988   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:03:01.073670   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:03:01.094532   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:03:01.115260   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:03:01.135860   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:03:01.156677   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:03:01.190615   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:03:01.222785   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:03:01.244994   20291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:03:01.265476   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:03:01.279792   20291 ssh_runner.go:195] Run: openssl version
	I1213 19:03:01.285146   20291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:03:01.294543   20291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.298539   20291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.298584   20291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:03:01.303667   20291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
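The openssl call above prints the CA's subject hash (b5213941 in this run), and a <hash>.0 symlink pointing at minikubeCA.pem is created in /etc/ssl/certs so the guest trusts the cluster CA. A sketch of those two steps:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pemPath, link); err != nil {
                fmt.Println("symlink failed (needs root):", err)
                return
            }
        }
        fmt.Println("CA trusted via", link)
    }
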
	I1213 19:03:01.312872   20291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:03:01.316422   20291 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:03:01.316467   20291 kubeadm.go:392] StartCluster: {Name:addons-649719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-649719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:03:01.316539   20291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:03:01.316577   20291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:03:01.349078   20291 cri.go:89] found id: ""
	I1213 19:03:01.349135   20291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:03:01.358168   20291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:03:01.367221   20291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:03:01.375768   20291 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:03:01.375783   20291 kubeadm.go:157] found existing configuration files:
	
	I1213 19:03:01.375812   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:03:01.383897   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:03:01.383947   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:03:01.392589   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:03:01.400709   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:03:01.400749   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:03:01.409098   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:03:01.417065   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:03:01.417103   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:03:01.425289   20291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:03:01.433078   20291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:03:01.433123   20291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:03:01.441151   20291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 19:03:01.582195   20291 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:03:11.707877   20291 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:03:11.707932   20291 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:03:11.707991   20291 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:03:11.708075   20291 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:03:11.708156   20291 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:03:11.708208   20291 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:03:11.709654   20291 out.go:235]   - Generating certificates and keys ...
	I1213 19:03:11.709714   20291 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:03:11.709786   20291 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:03:11.709860   20291 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:03:11.709912   20291 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:03:11.709963   20291 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:03:11.710006   20291 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:03:11.710050   20291 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:03:11.710148   20291 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-649719 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1213 19:03:11.710204   20291 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:03:11.710322   20291 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-649719 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1213 19:03:11.710380   20291 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:03:11.710437   20291 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:03:11.710504   20291 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:03:11.710592   20291 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:03:11.710674   20291 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:03:11.710783   20291 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:03:11.710836   20291 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:03:11.710917   20291 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:03:11.710970   20291 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:03:11.711044   20291 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:03:11.711108   20291 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:03:11.712259   20291 out.go:235]   - Booting up control plane ...
	I1213 19:03:11.712368   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:03:11.712464   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:03:11.712553   20291 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:03:11.712677   20291 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:03:11.712798   20291 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:03:11.712841   20291 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:03:11.712958   20291 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:03:11.713077   20291 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:03:11.713133   20291 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001092244s
	I1213 19:03:11.713193   20291 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:03:11.713242   20291 kubeadm.go:310] [api-check] The API server is healthy after 4.502177067s
	I1213 19:03:11.713328   20291 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:03:11.713458   20291 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:03:11.713525   20291 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:03:11.713695   20291 kubeadm.go:310] [mark-control-plane] Marking the node addons-649719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:03:11.713743   20291 kubeadm.go:310] [bootstrap-token] Using token: fm4k4c.240oitggzttgdkur
	I1213 19:03:11.714993   20291 out.go:235]   - Configuring RBAC rules ...
	I1213 19:03:11.715098   20291 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:03:11.715191   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:03:11.715347   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:03:11.715457   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:03:11.715559   20291 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:03:11.715645   20291 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:03:11.715767   20291 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:03:11.715818   20291 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:03:11.715883   20291 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:03:11.715890   20291 kubeadm.go:310] 
	I1213 19:03:11.715975   20291 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:03:11.715983   20291 kubeadm.go:310] 
	I1213 19:03:11.716051   20291 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:03:11.716062   20291 kubeadm.go:310] 
	I1213 19:03:11.716088   20291 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:03:11.716141   20291 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:03:11.716188   20291 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:03:11.716194   20291 kubeadm.go:310] 
	I1213 19:03:11.716244   20291 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:03:11.716252   20291 kubeadm.go:310] 
	I1213 19:03:11.716290   20291 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:03:11.716296   20291 kubeadm.go:310] 
	I1213 19:03:11.716342   20291 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:03:11.716404   20291 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:03:11.716479   20291 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:03:11.716488   20291 kubeadm.go:310] 
	I1213 19:03:11.716578   20291 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:03:11.716647   20291 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:03:11.716653   20291 kubeadm.go:310] 
	I1213 19:03:11.716724   20291 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fm4k4c.240oitggzttgdkur \
	I1213 19:03:11.716835   20291 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 19:03:11.716856   20291 kubeadm.go:310] 	--control-plane 
	I1213 19:03:11.716862   20291 kubeadm.go:310] 
	I1213 19:03:11.716930   20291 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:03:11.716942   20291 kubeadm.go:310] 
	I1213 19:03:11.717013   20291 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fm4k4c.240oitggzttgdkur \
	I1213 19:03:11.717176   20291 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 19:03:11.717196   20291 cni.go:84] Creating CNI manager for ""
	I1213 19:03:11.717209   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:03:11.718463   20291 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 19:03:11.719496   20291 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 19:03:11.731103   20291 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 19:03:11.748875   20291 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:03:11.748961   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:11.749002   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649719 minikube.k8s.io/updated_at=2024_12_13T19_03_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-649719 minikube.k8s.io/primary=true
	I1213 19:03:11.870977   20291 ops.go:34] apiserver oom_adj: -16
	I1213 19:03:11.871082   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:12.371950   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:12.872047   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:13.371422   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:13.871320   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:14.371786   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:14.872099   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:15.371485   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:15.872077   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:16.371337   20291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:03:16.511233   20291 kubeadm.go:1113] duration metric: took 4.762339966s to wait for elevateKubeSystemPrivileges
	I1213 19:03:16.511271   20291 kubeadm.go:394] duration metric: took 15.194808803s to StartCluster
	I1213 19:03:16.511298   20291 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:16.511421   20291 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:03:16.511877   20291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:03:16.512106   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:03:16.512110   20291 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:03:16.512178   20291 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 19:03:16.512278   20291 addons.go:69] Setting yakd=true in profile "addons-649719"
	I1213 19:03:16.512300   20291 addons.go:234] Setting addon yakd=true in "addons-649719"
	I1213 19:03:16.512299   20291 addons.go:69] Setting ingress-dns=true in profile "addons-649719"
	I1213 19:03:16.512306   20291 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649719"
	I1213 19:03:16.512326   20291 addons.go:234] Setting addon ingress-dns=true in "addons-649719"
	I1213 19:03:16.512331   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512329   20291 addons.go:69] Setting registry=true in profile "addons-649719"
	I1213 19:03:16.512326   20291 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-649719"
	I1213 19:03:16.512350   20291 addons.go:234] Setting addon registry=true in "addons-649719"
	I1213 19:03:16.512356   20291 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-649719"
	I1213 19:03:16.512372   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512375   20291 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-649719"
	I1213 19:03:16.512379   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512386   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512399   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512773   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512778   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512794   20291 addons.go:69] Setting cloud-spanner=true in profile "addons-649719"
	I1213 19:03:16.512803   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512802   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512807   20291 addons.go:234] Setting addon cloud-spanner=true in "addons-649719"
	I1213 19:03:16.512814   20291 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649719"
	I1213 19:03:16.512818   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.512831   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512836   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512846   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512855   20291 addons.go:69] Setting volumesnapshots=true in profile "addons-649719"
	I1213 19:03:16.512868   20291 addons.go:234] Setting addon volumesnapshots=true in "addons-649719"
	I1213 19:03:16.512888   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512903   20291 addons.go:69] Setting metrics-server=true in profile "addons-649719"
	I1213 19:03:16.512905   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:03:16.512919   20291 addons.go:234] Setting addon metrics-server=true in "addons-649719"
	I1213 19:03:16.512947   20291 addons.go:69] Setting gcp-auth=true in profile "addons-649719"
	I1213 19:03:16.512957   20291 addons.go:69] Setting inspektor-gadget=true in profile "addons-649719"
	I1213 19:03:16.512962   20291 mustload.go:65] Loading cluster: addons-649719
	I1213 19:03:16.512970   20291 addons.go:234] Setting addon inspektor-gadget=true in "addons-649719"
	I1213 19:03:16.512989   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513056   20291 addons.go:69] Setting ingress=true in profile "addons-649719"
	I1213 19:03:16.513069   20291 addons.go:234] Setting addon ingress=true in "addons-649719"
	I1213 19:03:16.513097   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513126   20291 config.go:182] Loaded profile config "addons-649719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:03:16.513168   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513194   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513278   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513305   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513368   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513394   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513461   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513493   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513512   20291 addons.go:69] Setting default-storageclass=true in profile "addons-649719"
	I1213 19:03:16.513558   20291 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649719"
	I1213 19:03:16.512948   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512846   20291 addons.go:69] Setting volcano=true in profile "addons-649719"
	I1213 19:03:16.513730   20291 addons.go:234] Setting addon volcano=true in "addons-649719"
	I1213 19:03:16.513759   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.512806   20291 addons.go:69] Setting storage-provisioner=true in profile "addons-649719"
	I1213 19:03:16.513781   20291 addons.go:234] Setting addon storage-provisioner=true in "addons-649719"
	I1213 19:03:16.513805   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.513926   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.513952   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514019   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514043   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512838   20291 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649719"
	I1213 19:03:16.514130   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514149   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514150   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514176   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.512806   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.514787   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.513500   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514796   20291 out.go:177] * Verifying Kubernetes components...
	I1213 19:03:16.513528   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514269   20291 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649719"
	I1213 19:03:16.515241   20291 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-649719"
	I1213 19:03:16.515270   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.516383   20291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:03:16.514291   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.514484   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.520829   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.534492   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1213 19:03:16.534665   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I1213 19:03:16.534879   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I1213 19:03:16.534918   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I1213 19:03:16.535236   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.535335   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.535782   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.535801   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.535820   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.535858   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.535884   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.536463   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.536480   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.536549   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.536842   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.536898   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.537452   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.537488   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.538364   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.538385   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.538801   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.538839   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.540276   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.540423   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.540446   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.540899   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.541475   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.541519   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.542581   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.542623   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.562584   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1213 19:03:16.563123   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.563721   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.563750   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.564131   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.564314   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.564748   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1213 19:03:16.565135   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.565355   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I1213 19:03:16.565878   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.565898   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.565914   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.566298   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.566439   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.566459   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.567080   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.567119   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.567349   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.567897   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.567947   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.569506   20291 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-649719"
	I1213 19:03:16.569552   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.569917   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.569958   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.570305   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I1213 19:03:16.570810   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.571380   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.571400   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.571794   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.572059   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.582926   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I1213 19:03:16.582947   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
	I1213 19:03:16.582956   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1213 19:03:16.582972   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.582927   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1213 19:03:16.583379   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.583421   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.583890   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.583981   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.584034   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.584088   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.585384   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585407   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.585392   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585468   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.585496   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.585516   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.586071   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586080   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586128   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.586202   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I1213 19:03:16.586312   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.586311   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.586786   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.586817   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.587295   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.587668   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.587690   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.587750   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.587856   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.588045   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.588549   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.588584   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.589908   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I1213 19:03:16.589931   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.589985   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I1213 19:03:16.590213   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.590751   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.590766   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.590824   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.591386   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.591417   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.591749   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.591885   20291 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:03:16.592542   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.593188   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:03:16.593206   20291 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:03:16.593229   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.593316   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.594051   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:03:16.595140   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:03:16.595523   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1213 19:03:16.596304   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.596827   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.596845   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.597032   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:03:16.597831   20291 addons.go:234] Setting addon default-storageclass=true in "addons-649719"
	I1213 19:03:16.597872   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:16.598216   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.598249   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.598461   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.599156   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.599236   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.599281   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:03:16.599612   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.599632   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.600258   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.600430   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.600577   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.600794   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.601209   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.601301   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:03:16.602401   20291 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:03:16.603419   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:03:16.603525   20291 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:03:16.603550   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:03:16.603573   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.605528   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:03:16.606098   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1213 19:03:16.606462   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.606896   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.606920   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.606960   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.607297   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.607467   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.607542   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:03:16.607792   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.607817   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.607973   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.608114   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.608217   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.608286   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.608497   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:03:16.608515   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:03:16.608534   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.609497   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I1213 19:03:16.609841   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.610259   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.610289   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.610613   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.611176   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.611226   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.611292   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.611622   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.611691   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.611708   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.611876   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.611910   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.612058   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.612092   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.612253   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.612374   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.612488   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.612616   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.612655   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.616895   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I1213 19:03:16.617404   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.617909   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.617927   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.618273   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.618798   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.618831   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.618924   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I1213 19:03:16.619648   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.620135   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.620159   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.620523   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.620691   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.622208   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.623896   20291 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:03:16.624966   20291 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:03:16.624985   20291 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:03:16.625004   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.627984   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I1213 19:03:16.628163   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.628491   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.628489   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.628572   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.628606   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.628738   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.628878   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.628997   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.629365   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.629378   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.629782   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.629930   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.631428   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.632943   20291 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:03:16.634668   20291 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:03:16.634686   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:03:16.634703   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.636009   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I1213 19:03:16.636565   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.637047   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I1213 19:03:16.637538   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.637554   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.637963   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.637977   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.638094   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.638116   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.638556   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.638590   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.638826   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.638887   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.638826   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1213 19:03:16.639452   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.639528   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.639546   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.639530   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.640409   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.640463   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.640483   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.640499   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.640681   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.640929   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.640978   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.641352   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I1213 19:03:16.641985   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.642027   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.642292   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.642801   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.642816   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.643211   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.643350   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.643412   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.643576   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I1213 19:03:16.643958   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.644640   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.644662   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.644935   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.645142   20291 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:03:16.645153   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.645759   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.646239   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:03:16.646254   20291 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:03:16.646271   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.647488   20291 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:03:16.648123   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.648479   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1213 19:03:16.649137   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.649222   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.649507   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.649526   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.649748   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.649770   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.649891   20291 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:03:16.650009   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.650242   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:16.650342   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.650365   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.650746   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.650835   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.650967   20291 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:03:16.650977   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:03:16.650991   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.651572   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.652009   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I1213 19:03:16.652432   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.652905   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.652922   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.653278   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.653432   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.653548   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:16.654148   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.654524   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.654975   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.655013   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.655243   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.655481   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.655637   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.655685   20291 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:03:16.655727   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:03:16.655780   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.656032   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.656901   20291 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:03:16.656924   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:03:16.657092   20291 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:03:16.657109   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:03:16.657123   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.657169   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.657306   20291 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:03:16.657510   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I1213 19:03:16.658186   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.659162   20291 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:03:16.659181   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:03:16.659198   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.659354   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.659371   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.660208   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.661272   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:16.661316   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:16.661605   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.662495   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.662596   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I1213 19:03:16.662888   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1213 19:03:16.663273   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.663495   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.663515   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.663673   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.663683   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.663738   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.663752   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.663779   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.663967   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.664010   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.664072   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.664115   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.664250   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.664261   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.664340   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.664425   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.664762   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.664785   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.665779   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.665805   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.665968   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.666154   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.666292   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.666450   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.666457   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.666532   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.666746   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:16.666758   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:16.666973   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:16.666984   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:16.666991   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:16.666997   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:16.667146   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:16.667159   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:16.667161   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	W1213 19:03:16.667229   20291 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 19:03:16.667879   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.667895   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.673231   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.673419   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.675118   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.675525   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I1213 19:03:16.675824   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.676139   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.676151   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.676363   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.676443   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.676730   20291 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:03:16.677409   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
	I1213 19:03:16.677795   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:03:16.677812   20291 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:03:16.677830   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.677849   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.677931   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.678265   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.678276   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.678600   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.678948   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.679127   20291 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:03:16.680526   20291 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:03:16.680546   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:03:16.680561   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.681039   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.681459   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.681895   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.681925   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.682254   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.682388   20291 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:03:16.682396   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.682543   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.682770   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	W1213 19:03:16.683488   20291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54236->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.683508   20291 retry.go:31] will retry after 127.895773ms: ssh: handshake failed: read tcp 192.168.39.1:54236->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.683699   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I1213 19:03:16.683795   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.684037   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:16.684098   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.684107   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.684383   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.684498   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.684625   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:16.684636   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:16.684647   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.684730   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.684917   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:16.685056   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:16.686186   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:16.686380   20291 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:03:16.686393   20291 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:03:16.686409   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.688432   20291 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:03:16.688824   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.689204   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.689234   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.689372   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.689498   20291 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:03:16.689513   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:03:16.689528   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:16.689532   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.689669   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.689814   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:16.692254   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.692639   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:16.692686   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:16.692887   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:16.693086   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:16.693192   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:16.693282   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	W1213 19:03:16.694320   20291 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54270->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.694338   20291 retry.go:31] will retry after 279.431936ms: ssh: handshake failed: read tcp 192.168.39.1:54270->192.168.39.191:22: read: connection reset by peer
	I1213 19:03:16.896284   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:03:16.962822   20291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:03:16.967415   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:03:16.993018   20291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 19:03:17.096990   20291 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:03:17.097020   20291 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:03:17.109501   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:03:17.162145   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:03:17.162176   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:03:17.185774   20291 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:03:17.185802   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:03:17.207040   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:03:17.207061   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:03:17.218756   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:03:17.218772   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:03:17.230826   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:03:17.252101   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:03:17.266091   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:03:17.293625   20291 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:03:17.293647   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:03:17.296471   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:03:17.309401   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:03:17.309432   20291 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:03:17.358425   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:03:17.358453   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:03:17.422885   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:03:17.434841   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:03:17.434875   20291 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:03:17.471377   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:03:17.471400   20291 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:03:17.489584   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:03:17.489607   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:03:17.497937   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:03:17.569936   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:03:17.569969   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:03:17.579596   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:03:17.579862   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:03:17.579879   20291 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:03:17.606224   20291 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:03:17.606250   20291 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:03:17.676448   20291 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:03:17.676474   20291 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:03:17.716191   20291 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:03:17.716213   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:03:17.778820   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:03:17.805163   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:03:17.870233   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:03:17.870265   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:03:17.918471   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:03:17.918499   20291 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:03:18.176904   20291 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:03:18.176936   20291 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:03:18.178095   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.281776665s)
	I1213 19:03:18.178132   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.178143   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.178148   20291 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.215289586s)
	I1213 19:03:18.178430   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.178445   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.178455   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.178459   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.178463   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.179123   20291 node_ready.go:35] waiting up to 6m0s for node "addons-649719" to be "Ready" ...
	I1213 19:03:18.179388   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.179390   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.179407   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.196890   20291 node_ready.go:49] node "addons-649719" has status "Ready":"True"
	I1213 19:03:18.196914   20291 node_ready.go:38] duration metric: took 17.750184ms for node "addons-649719" to be "Ready" ...
	I1213 19:03:18.196926   20291 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:18.210263   20291 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:18.210295   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:03:18.210632   20291 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:18.306648   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.339199811s)
	I1213 19:03:18.306745   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.306760   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.307085   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.307106   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.307115   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.307122   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.307146   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.307356   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.307375   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.307387   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.320357   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:18.320373   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:18.320670   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:18.320689   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:18.320675   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:18.366139   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:03:18.366164   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:03:18.519724   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:18.656604   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:03:18.656631   20291 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:03:18.843458   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:03:18.843490   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:03:18.969355   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:03:18.969379   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:03:19.005779   20291 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.012727526s)
	I1213 19:03:19.005815   20291 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 19:03:19.277142   20291 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:03:19.277165   20291 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:03:19.488293   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:03:19.536872   20291 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-649719" context rescaled to 1 replicas
	I1213 19:03:20.281103   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:22.754990   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:23.619081   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:03:23.619129   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:23.622085   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:23.622554   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:23.622585   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:23.622734   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:23.622977   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:23.623151   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:23.623307   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:24.033040   20291 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:03:24.113430   20291 addons.go:234] Setting addon gcp-auth=true in "addons-649719"
	I1213 19:03:24.113488   20291 host.go:66] Checking if "addons-649719" exists ...
	I1213 19:03:24.113780   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:24.113825   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:24.129361   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I1213 19:03:24.129747   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:24.130240   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:24.130259   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:24.130613   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:24.131167   20291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:03:24.131205   20291 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:03:24.146199   20291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I1213 19:03:24.147227   20291 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:03:24.147785   20291 main.go:141] libmachine: Using API Version  1
	I1213 19:03:24.147808   20291 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:03:24.148121   20291 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:03:24.148323   20291 main.go:141] libmachine: (addons-649719) Calling .GetState
	I1213 19:03:24.150000   20291 main.go:141] libmachine: (addons-649719) Calling .DriverName
	I1213 19:03:24.150251   20291 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:03:24.150271   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHHostname
	I1213 19:03:24.153257   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:24.153741   20291 main.go:141] libmachine: (addons-649719) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:6b:aa", ip: ""} in network mk-addons-649719: {Iface:virbr1 ExpiryTime:2024-12-13 20:02:45 +0000 UTC Type:0 Mac:52:54:00:9c:6b:aa Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-649719 Clientid:01:52:54:00:9c:6b:aa}
	I1213 19:03:24.153765   20291 main.go:141] libmachine: (addons-649719) DBG | domain addons-649719 has defined IP address 192.168.39.191 and MAC address 52:54:00:9c:6b:aa in network mk-addons-649719
	I1213 19:03:24.153964   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHPort
	I1213 19:03:24.154133   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHKeyPath
	I1213 19:03:24.154280   20291 main.go:141] libmachine: (addons-649719) Calling .GetSSHUsername
	I1213 19:03:24.154454   20291 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/addons-649719/id_rsa Username:docker}
	I1213 19:03:24.414218   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.304680254s)
	I1213 19:03:24.414269   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414279   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414341   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.183483555s)
	I1213 19:03:24.414383   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.162259123s)
	I1213 19:03:24.414402   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414383   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414427   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414412   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414475   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.148353579s)
	I1213 19:03:24.414507   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.118019181s)
	I1213 19:03:24.414519   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414526   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414532   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414531   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.414543   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414549   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.414558   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414567   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414631   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.991724351s)
	I1213 19:03:24.414660   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414669   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414733   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.916772943s)
	I1213 19:03:24.414747   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414755   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414833   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.835213799s)
	I1213 19:03:24.414867   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414877   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.414971   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.636118841s)
	I1213 19:03:24.414990   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.414998   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415070   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.609880098s)
	I1213 19:03:24.415086   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415096   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415220   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.895466762s)
	W1213 19:03:24.415250   20291 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:24.415283   20291 retry.go:31] will retry after 156.830153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:03:24.415427   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415464   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415472   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415481   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415488   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415537   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415558   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415564   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415571   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415578   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415620   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415639   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415646   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415653   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415659   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415696   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415714   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415721   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415728   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415734   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415769   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415786   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415795   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415802   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.415809   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.415843   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.415860   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.415866   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.415875   20291 addons.go:475] Verifying addon ingress=true in "addons-649719"
	I1213 19:03:24.416098   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.416131   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416138   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.416145   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.416151   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.416588   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416601   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.416611   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.416619   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.416845   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.416880   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.416888   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417017   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417040   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417055   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417063   20291 addons.go:475] Verifying addon metrics-server=true in "addons-649719"
	I1213 19:03:24.417161   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417199   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417207   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417362   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417389   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417396   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417405   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.417413   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.417460   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417478   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417485   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.417492   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.417499   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.417940   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.417969   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.417977   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.418087   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.418108   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.418114   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419107   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419126   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419149   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.419155   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419290   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.419323   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.419330   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.419349   20291 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649719 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:03:24.419999   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.420008   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.420607   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:24.420635   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.420642   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.420650   20291 addons.go:475] Verifying addon registry=true in "addons-649719"
	I1213 19:03:24.422211   20291 out.go:177] * Verifying ingress addon...
	I1213 19:03:24.422211   20291 out.go:177] * Verifying registry addon...
	I1213 19:03:24.424760   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:03:24.424826   20291 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:03:24.439180   20291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:03:24.439202   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.439382   20291 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:03:24.439395   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:24.463333   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:24.463357   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:24.463666   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:24.463685   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:24.573083   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:03:24.929709   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:24.931308   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.231691   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:25.434089   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.434339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.843488   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.355137443s)
	I1213 19:03:25.843541   20291 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.693270538s)
	I1213 19:03:25.843545   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:25.843563   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:25.843824   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:25.843886   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:25.843900   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:25.843915   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:25.843922   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:25.844227   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:25.844245   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:25.844245   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:25.844256   20291 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-649719"
	I1213 19:03:25.845231   20291 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:03:25.846238   20291 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:03:25.847822   20291 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:03:25.848555   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:03:25.849170   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:03:25.849193   20291 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:03:25.865655   20291 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:25.865677   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:25.931958   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:25.932601   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:25.946794   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:03:25.946829   20291 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:03:26.075425   20291 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:26.075453   20291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:03:26.142029   20291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:03:26.353328   20291 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:03:26.353357   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.429245   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.429436   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:26.546253   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.97312431s)
	I1213 19:03:26.546306   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:26.546323   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:26.546600   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:26.546622   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:26.546632   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:26.546639   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:26.548091   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:26.548120   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:26.548133   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:26.853114   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:26.969034   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:26.969476   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.195426   20291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.053356237s)
	I1213 19:03:27.195468   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:27.195478   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:27.195740   20291 main.go:141] libmachine: (addons-649719) DBG | Closing plugin on server side
	I1213 19:03:27.195764   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:27.195810   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:27.195825   20291 main.go:141] libmachine: Making call to close driver server
	I1213 19:03:27.195832   20291 main.go:141] libmachine: (addons-649719) Calling .Close
	I1213 19:03:27.196047   20291 main.go:141] libmachine: Successfully made call to close driver server
	I1213 19:03:27.196063   20291 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 19:03:27.197306   20291 addons.go:475] Verifying addon gcp-auth=true in "addons-649719"
	I1213 19:03:27.198972   20291 out.go:177] * Verifying gcp-auth addon...
	I1213 19:03:27.201201   20291 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:03:27.206570   20291 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:03:27.206587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.244461   20291 pod_ready.go:103] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status "Ready":"False"
	I1213 19:03:27.367117   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.436882   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.437100   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:27.705847   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:27.853159   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:27.929613   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:27.929767   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.204661   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.218200   20291 pod_ready.go:98] pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.191 HostIPs:[{IP:192.168.39.191}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-13 19:03:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-13 19:03:21 +0000 UTC,FinishedAt:2024-12-13 19:03:27 +0000 UTC,ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306 Started:0xc001eb6a40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d346a0} {Name:kube-api-access-w69mj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d346b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1213 19:03:28.218231   20291 pod_ready.go:82] duration metric: took 10.007571683s for pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace to be "Ready" ...
	E1213 19:03:28.218242   20291 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-jq5cx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-13 19:03:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.191 HostIPs:[{IP:192.168.39.191}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-13 19:03:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-13 19:03:21 +0000 UTC,FinishedAt:2024-12-13 19:03:27 +0000 UTC,ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a0665184c73066d9aea83dc1ca6c748e434eb138f6cdd6123bdb9244889eb306 Started:0xc001eb6a40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d346a0} {Name:kube-api-access-w69mj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d346b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1213 19:03:28.218253   20291 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.226527   20291 pod_ready.go:93] pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.226554   20291 pod_ready.go:82] duration metric: took 8.29183ms for pod "coredns-7c65d6cfc9-w7p7w" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.226568   20291 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.239584   20291 pod_ready.go:93] pod "etcd-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.239608   20291 pod_ready.go:82] duration metric: took 13.032083ms for pod "etcd-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.239619   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.246836   20291 pod_ready.go:93] pod "kube-apiserver-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.246873   20291 pod_ready.go:82] duration metric: took 7.245365ms for pod "kube-apiserver-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.246886   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.252300   20291 pod_ready.go:93] pod "kube-controller-manager-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.252327   20291 pod_ready.go:82] duration metric: took 5.433009ms for pod "kube-controller-manager-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.252342   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zhqf7" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.355877   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.429537   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.431706   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:28.614723   20291 pod_ready.go:93] pod "kube-proxy-zhqf7" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:28.614745   20291 pod_ready.go:82] duration metric: took 362.396016ms for pod "kube-proxy-zhqf7" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.614753   20291 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:28.704774   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:28.852912   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:28.929233   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:28.929880   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.014770   20291 pod_ready.go:93] pod "kube-scheduler-addons-649719" in "kube-system" namespace has status "Ready":"True"
	I1213 19:03:29.014800   20291 pod_ready.go:82] duration metric: took 400.038737ms for pod "kube-scheduler-addons-649719" in "kube-system" namespace to be "Ready" ...
	I1213 19:03:29.014810   20291 pod_ready.go:39] duration metric: took 10.81787256s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:03:29.014826   20291 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:03:29.014904   20291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:03:29.060144   20291 api_server.go:72] duration metric: took 12.548000761s to wait for apiserver process to appear ...
	I1213 19:03:29.060171   20291 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:03:29.060195   20291 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I1213 19:03:29.064866   20291 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I1213 19:03:29.065804   20291 api_server.go:141] control plane version: v1.31.2
	I1213 19:03:29.065824   20291 api_server.go:131] duration metric: took 5.64588ms to wait for apiserver health ...
	I1213 19:03:29.065832   20291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:03:29.205325   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.219040   20291 system_pods.go:59] 18 kube-system pods found
	I1213 19:03:29.219073   20291 system_pods.go:61] "amd-gpu-device-plugin-pwrjv" [8cd61049-3892-4422-bb65-27b37c47bafb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 19:03:29.219079   20291 system_pods.go:61] "coredns-7c65d6cfc9-w7p7w" [7ff9e37e-de38-4caa-b342-bd85b02357c1] Running
	I1213 19:03:29.219086   20291 system_pods.go:61] "csi-hostpath-attacher-0" [1fbc15fc-5d42-41f9-8790-47e42f716cc5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 19:03:29.219092   20291 system_pods.go:61] "csi-hostpath-resizer-0" [9331abab-a969-497c-a8ee-a6eb8d49d647] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 19:03:29.219100   20291 system_pods.go:61] "csi-hostpathplugin-zrvnk" [3e44db57-e7a0-4ad7-846c-6f034b87d938] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 19:03:29.219106   20291 system_pods.go:61] "etcd-addons-649719" [c50e5927-a88a-4246-9cac-d92cd80c8dc4] Running
	I1213 19:03:29.219109   20291 system_pods.go:61] "kube-apiserver-addons-649719" [a0d02add-130d-4c4b-9785-d22944023899] Running
	I1213 19:03:29.219113   20291 system_pods.go:61] "kube-controller-manager-addons-649719" [0f06f930-787a-4b89-9d21-62047d0ff6c9] Running
	I1213 19:03:29.219119   20291 system_pods.go:61] "kube-ingress-dns-minikube" [e406783b-1c28-4447-81fd-72cb0ef3b306] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 19:03:29.219123   20291 system_pods.go:61] "kube-proxy-zhqf7" [17cc9d6e-fee4-451f-a0d8-91ebf081f894] Running
	I1213 19:03:29.219127   20291 system_pods.go:61] "kube-scheduler-addons-649719" [d43e44ed-30af-4612-a992-3added273b60] Running
	I1213 19:03:29.219131   20291 system_pods.go:61] "metrics-server-84c5f94fbc-m8bmq" [19020284-7a06-4b3e-af82-964b038c6aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 19:03:29.219138   20291 system_pods.go:61] "nvidia-device-plugin-daemonset-7scc7" [9ac38625-793e-41f6-85f0-ceb6f87c9f02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 19:03:29.219147   20291 system_pods.go:61] "registry-5cc95cd69-pj78t" [ce97be6a-8047-4747-a0f2-aa19bd1ffd4e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 19:03:29.219152   20291 system_pods.go:61] "registry-proxy-q8msp" [831a22d5-3f2d-460b-a739-1e316400aebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 19:03:29.219160   20291 system_pods.go:61] "snapshot-controller-56fcc65765-qddnd" [f8de8150-1a12-4a3a-9e2f-19b427174422] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.219165   20291 system_pods.go:61] "snapshot-controller-56fcc65765-zchf9" [d9385680-6ee6-4cd9-ab58-c0ab8290ac77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.219169   20291 system_pods.go:61] "storage-provisioner" [bfe88593-e74e-4b8a-841d-81f2488dc9b4] Running
	I1213 19:03:29.219175   20291 system_pods.go:74] duration metric: took 153.338369ms to wait for pod list to return data ...
	I1213 19:03:29.219184   20291 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:03:29.352719   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.414680   20291 default_sa.go:45] found service account: "default"
	I1213 19:03:29.414702   20291 default_sa.go:55] duration metric: took 195.512097ms for default service account to be created ...
	I1213 19:03:29.414710   20291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:03:29.431017   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.431610   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:29.621084   20291 system_pods.go:86] 18 kube-system pods found
	I1213 19:03:29.621117   20291 system_pods.go:89] "amd-gpu-device-plugin-pwrjv" [8cd61049-3892-4422-bb65-27b37c47bafb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 19:03:29.621124   20291 system_pods.go:89] "coredns-7c65d6cfc9-w7p7w" [7ff9e37e-de38-4caa-b342-bd85b02357c1] Running
	I1213 19:03:29.621131   20291 system_pods.go:89] "csi-hostpath-attacher-0" [1fbc15fc-5d42-41f9-8790-47e42f716cc5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 19:03:29.621136   20291 system_pods.go:89] "csi-hostpath-resizer-0" [9331abab-a969-497c-a8ee-a6eb8d49d647] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 19:03:29.621143   20291 system_pods.go:89] "csi-hostpathplugin-zrvnk" [3e44db57-e7a0-4ad7-846c-6f034b87d938] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 19:03:29.621147   20291 system_pods.go:89] "etcd-addons-649719" [c50e5927-a88a-4246-9cac-d92cd80c8dc4] Running
	I1213 19:03:29.621152   20291 system_pods.go:89] "kube-apiserver-addons-649719" [a0d02add-130d-4c4b-9785-d22944023899] Running
	I1213 19:03:29.621156   20291 system_pods.go:89] "kube-controller-manager-addons-649719" [0f06f930-787a-4b89-9d21-62047d0ff6c9] Running
	I1213 19:03:29.621164   20291 system_pods.go:89] "kube-ingress-dns-minikube" [e406783b-1c28-4447-81fd-72cb0ef3b306] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 19:03:29.621168   20291 system_pods.go:89] "kube-proxy-zhqf7" [17cc9d6e-fee4-451f-a0d8-91ebf081f894] Running
	I1213 19:03:29.621175   20291 system_pods.go:89] "kube-scheduler-addons-649719" [d43e44ed-30af-4612-a992-3added273b60] Running
	I1213 19:03:29.621180   20291 system_pods.go:89] "metrics-server-84c5f94fbc-m8bmq" [19020284-7a06-4b3e-af82-964b038c6aea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 19:03:29.621186   20291 system_pods.go:89] "nvidia-device-plugin-daemonset-7scc7" [9ac38625-793e-41f6-85f0-ceb6f87c9f02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 19:03:29.621197   20291 system_pods.go:89] "registry-5cc95cd69-pj78t" [ce97be6a-8047-4747-a0f2-aa19bd1ffd4e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 19:03:29.621205   20291 system_pods.go:89] "registry-proxy-q8msp" [831a22d5-3f2d-460b-a739-1e316400aebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 19:03:29.621209   20291 system_pods.go:89] "snapshot-controller-56fcc65765-qddnd" [f8de8150-1a12-4a3a-9e2f-19b427174422] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.621215   20291 system_pods.go:89] "snapshot-controller-56fcc65765-zchf9" [d9385680-6ee6-4cd9-ab58-c0ab8290ac77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 19:03:29.621219   20291 system_pods.go:89] "storage-provisioner" [bfe88593-e74e-4b8a-841d-81f2488dc9b4] Running
	I1213 19:03:29.621228   20291 system_pods.go:126] duration metric: took 206.513579ms to wait for k8s-apps to be running ...
	I1213 19:03:29.621235   20291 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:03:29.621274   20291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:03:29.638401   20291 system_svc.go:56] duration metric: took 17.154974ms WaitForService to wait for kubelet
	I1213 19:03:29.638429   20291 kubeadm.go:582] duration metric: took 13.126290634s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:03:29.638451   20291 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:03:29.716193   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:29.815545   20291 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 19:03:29.815574   20291 node_conditions.go:123] node cpu capacity is 2
	I1213 19:03:29.815588   20291 node_conditions.go:105] duration metric: took 177.131475ms to run NodePressure ...
	I1213 19:03:29.815600   20291 start.go:241] waiting for startup goroutines ...
	I1213 19:03:29.853655   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:29.929025   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:29.929380   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.204405   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.353515   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.428680   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:30.429085   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.705401   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:30.854659   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:30.928907   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:30.929497   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.204748   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.352740   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.431042   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:31.431332   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.704901   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:31.853787   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:31.929442   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:31.929569   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.204505   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.353587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.429639   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.429868   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:32.703964   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:32.853068   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:32.928701   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:32.930034   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.205131   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.353750   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.431697   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:33.432480   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.704358   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:33.852933   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:33.928444   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:33.931163   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.204961   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.352648   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.429576   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.429821   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:34.703937   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:34.852800   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:34.928895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:34.929605   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.309289   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.460014   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.460236   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:35.460777   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.705223   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:35.852601   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:35.929060   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:35.929404   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.204563   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.354011   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.429619   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.430429   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:36.704934   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:36.852435   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:36.929689   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:36.931031   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.204225   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.353838   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.430278   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.430484   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:37.706395   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:37.855542   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:37.929713   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:37.930012   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.204362   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.352969   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.428670   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.428678   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:38.703870   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:38.852624   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:38.928356   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:38.929194   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.204915   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.353394   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.428683   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.429344   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:39.704926   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:39.852875   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:39.928254   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:39.928566   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.205014   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.354299   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.428562   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:40.429077   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.704457   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:40.853312   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:40.929719   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:40.930291   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.204745   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.352651   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.429023   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:41.429449   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.705351   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:41.853796   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:41.929452   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:41.929797   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.204325   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.353008   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.429817   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:42.430569   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:42.704832   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:42.853998   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:42.928799   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:42.930343   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.205953   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.352411   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.429245   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:43.430574   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:43.704410   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:43.853482   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:43.929657   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:43.930104   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.204302   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.353358   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.428706   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:44.430358   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:44.704632   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:44.853684   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:44.929008   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:44.929453   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.204218   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.643538   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:45.644545   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.649803   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:45.705985   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:45.853884   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:45.931126   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:45.931414   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.204818   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.353083   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.430132   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.430198   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:46.705054   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:46.852775   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:46.929060   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:46.929102   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.204940   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.352731   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.428840   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.429316   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:47.704101   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:47.853260   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:47.928740   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:47.929645   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.204757   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.353672   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.428596   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:48.431043   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:48.704638   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:48.853663   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:48.930394   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:48.931103   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.204565   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.353483   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.428578   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:49.430147   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:49.706075   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:49.854180   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:49.929099   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:49.929428   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.205568   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.352905   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.428901   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:50.430052   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:50.704509   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:50.853413   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:50.928768   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:50.928825   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.204561   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.353452   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.429436   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:51.429953   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:51.704275   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:51.853490   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:51.928912   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:51.929823   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.205401   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.353085   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.428695   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:52.429096   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:52.704274   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:52.853809   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:52.929880   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:52.930129   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.204975   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.352717   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.428148   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:53.428375   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:53.707298   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:53.856725   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:53.929339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:53.930593   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.205197   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.353092   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.429660   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.430177   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:54.704800   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:54.852574   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:54.928634   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:54.929087   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.206381   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.354793   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.428418   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.428578   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:55.705130   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:55.853121   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:55.928370   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:55.929162   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.204983   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.353614   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.429359   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:56.429897   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.704149   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:56.853799   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:56.929727   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:56.930487   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.204994   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.353069   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.428942   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:57.429855   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.704972   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:57.853146   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:57.953215   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:57.953505   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.205585   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.353176   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.428624   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:58.428947   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:58.704796   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:58.852508   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:58.928885   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:58.929305   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.205083   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.352902   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.428587   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:59.428874   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:03:59.703999   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:03:59.852457   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:03:59.929074   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:03:59.929798   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.204729   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.353890   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.428990   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:00.429427   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:00.704677   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:00.852579   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:00.928334   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:00.930076   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.204890   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.352754   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.428940   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:01.429118   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:01.704369   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:01.854832   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:01.928884   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:04:01.929129   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.205123   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.352941   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.430899   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:02.431373   20291 kapi.go:107] duration metric: took 38.006613146s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 19:04:02.704871   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:02.852461   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:02.929473   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.205321   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.353036   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.428662   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:03.704632   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:03.853967   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:03.928997   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.204623   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.356198   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.429884   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:04.705534   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:04.853847   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:04.928999   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.205019   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.781493   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:05.781576   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:05.782625   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.853853   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:05.929468   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.205471   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.355977   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.454052   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:06.705025   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:06.853507   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:06.929078   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.224973   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.352959   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.429579   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:07.704772   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:07.854015   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:07.929641   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.204967   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.353702   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.428592   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:08.704617   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:08.853819   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:08.929383   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.204491   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.353171   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.428670   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:09.704671   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:09.854712   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:09.929110   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.204767   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.355626   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.428705   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:10.704035   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:10.852699   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:10.929001   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.204619   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.354010   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.429061   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:11.704704   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:11.853527   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:11.928548   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.205119   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.352764   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.428501   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:12.704510   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:12.853367   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:12.928995   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.204438   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.354569   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.429031   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:13.704459   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:13.853337   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:13.928475   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.205040   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.352611   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.428946   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:14.704135   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:14.853727   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:14.929076   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.205955   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.822396   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:15.823208   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:15.823715   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.855388   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:15.929585   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.205103   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.354553   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.431323   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:16.705745   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:16.854287   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:16.929088   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.204844   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.354155   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:17.429122   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:17.704053   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:17.852649   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.096819   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.204895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:18.354568   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.455992   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:18.705302   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:18.853176   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:18.928955   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.205180   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:19.353256   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.429180   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:19.704775   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:19.853578   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:19.930799   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.204460   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:20.353301   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.429206   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:20.704678   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:20.853470   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:20.929112   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.204641   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:21.371782   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.433678   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:21.705318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:21.853119   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:21.929488   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.205093   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:22.354577   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.429028   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:22.704318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:22.852824   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:22.928929   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.205218   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:23.355097   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.566314   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:23.704927   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:23.852935   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:23.953057   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:24.205024   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:24.352747   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.430749   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:24.704910   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:24.853368   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:24.929305   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:25.205125   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:25.353540   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.431622   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:25.705428   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:25.853339   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:25.928840   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:26.204501   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:26.353533   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.432198   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:26.705249   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:26.853655   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:26.930063   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:27.204572   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:27.353601   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.429676   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:27.704282   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:27.852708   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:27.930310   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:28.205536   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:28.353754   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.428505   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:28.705659   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:28.857318   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:28.958509   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:29.224252   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:29.353307   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.429186   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:29.704895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:29.852288   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:29.929233   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:30.205075   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:30.353331   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:30.429323   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.052660   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.053507   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.054326   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.205167   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.352809   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.428504   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:31.705513   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:31.853129   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:31.928814   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:32.205023   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:32.352829   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:32.428668   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:32.704853   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:32.852935   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:32.953564   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:33.205276   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:33.353407   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:33.429280   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:33.848765   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:33.857295   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:33.928908   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:34.204187   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:34.353465   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:34.429186   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:34.704562   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:34.853494   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:34.929653   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:35.207370   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:35.356844   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:35.428879   20291 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:04:35.711480   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:35.853291   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:35.929071   20291 kapi.go:107] duration metric: took 1m11.504230916s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:04:36.211009   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:36.358778   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:36.704201   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.065031   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:37.260292   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.362693   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:37.705491   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:37.853639   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:38.205111   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:38.353597   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:38.704528   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:38.862910   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:39.204175   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:39.358321   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:39.704838   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:39.852755   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:40.204654   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:40.353224   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:40.704675   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:04:40.854916   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:41.204668   20291 kapi.go:107] duration metric: took 1m14.003465191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:04:41.206366   20291 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649719 cluster.
	I1213 19:04:41.207678   20291 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:04:41.208809   20291 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 19:04:41.361895   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:41.854462   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:42.352834   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:42.854643   20291 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:04:43.353840   20291 kapi.go:107] duration metric: took 1m17.505280005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:04:43.355778   20291 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, nvidia-device-plugin, ingress-dns, storage-provisioner, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 19:04:43.357115   20291 addons.go:510] duration metric: took 1m26.844938547s for enable addons: enabled=[cloud-spanner default-storageclass metrics-server inspektor-gadget nvidia-device-plugin ingress-dns storage-provisioner amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1213 19:04:43.357155   20291 start.go:246] waiting for cluster config update ...
	I1213 19:04:43.357172   20291 start.go:255] writing updated cluster config ...
	I1213 19:04:43.357408   20291 ssh_runner.go:195] Run: rm -f paused
	I1213 19:04:43.406100   20291 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:04:43.407774   20291 out.go:177] * Done! kubectl is now configured to use "addons-649719" cluster and "default" namespace by default
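	The gcp-auth messages above reference a `gcp-auth-skip-secret` label for opting a pod out of credential mounting. A minimal sketch of what such a pod spec might look like, assuming the addon's webhook checks for the value "true" (only the label key is confirmed by the log; the value, pod name, and image below are illustrative):

		# hypothetical manifest fragment; label key taken from the log above, value assumed
		apiVersion: v1
		kind: Pod
		metadata:
		  name: no-gcp-creds            # hypothetical name
		  labels:
		    gcp-auth-skip-secret: "true"  # opt this pod out of credential injection
		spec:
		  containers:
		  - name: app
		    image: busybox
		    command: ["sleep", "3600"]

	Conversely, for pods created before the addon finished enabling, the log above suggests rerunning the enable step with the refresh flag, e.g. `out/minikube-linux-amd64 -p addons-649719 addons enable gcp-auth --refresh`.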
	
	
	==> CRI-O <==
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.515505562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049515478782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adc9db39-1813-4dfa-bb59-e5003eea30bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.516011460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fd0ff57-d5b4-43e5-a5f1-3f65747752b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.516067264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fd0ff57-d5b4-43e5-a5f1-3f65747752b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.516326767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0084b577a2be4f5b577b2280be0e9f309c7900f8ac4a1827654a22b799b942ea,PodSandboxId:995ef3f8f3f23b1dc075220a0e67f07b03e006586e1b55c6a84be26e93fbe45c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734116903205125623,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-j75hj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 136a8667-7817-4513-8b26-b79a9e43f9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-b
c86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec3843b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fd0ff57-d5b4-43e5-a5f1-3f65747752b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.551822580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=834071a4-75a9-4606-8f81-0d7b6adb0308 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.551897054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=834071a4-75a9-4606-8f81-0d7b6adb0308 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.552856735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57ec806c-1b81-4468-bd85-6b0b3132689c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.554018148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049553991436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57ec806c-1b81-4468-bd85-6b0b3132689c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.554645745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fe8f2bb-5aa5-4a66-b233-723d212d2217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.554704160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fe8f2bb-5aa5-4a66-b233-723d212d2217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.554958242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0084b577a2be4f5b577b2280be0e9f309c7900f8ac4a1827654a22b799b942ea,PodSandboxId:995ef3f8f3f23b1dc075220a0e67f07b03e006586e1b55c6a84be26e93fbe45c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734116903205125623,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-j75hj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 136a8667-7817-4513-8b26-b79a9e43f9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-b
c86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec3843b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fe8f2bb-5aa5-4a66-b233-723d212d2217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.591260350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81a0a1ec-1028-452c-9e70-0a466808a509 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.591332592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81a0a1ec-1028-452c-9e70-0a466808a509 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.592302068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a214c3ef-5eb1-4676-ae51-bdf992f17b98 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.593880291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049593855319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a214c3ef-5eb1-4676-ae51-bdf992f17b98 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.594326540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4cd9ac8-0a4a-4946-bb48-c905fd04a4b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.594404374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4cd9ac8-0a4a-4946-bb48-c905fd04a4b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.594750957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0084b577a2be4f5b577b2280be0e9f309c7900f8ac4a1827654a22b799b942ea,PodSandboxId:995ef3f8f3f23b1dc075220a0e67f07b03e006586e1b55c6a84be26e93fbe45c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734116903205125623,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-j75hj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 136a8667-7817-4513-8b26-b79a9e43f9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-b
c86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec3843b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4cd9ac8-0a4a-4946-bb48-c905fd04a4b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.627222057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8216a49-0789-4dcd-8e27-284f724fc4e9 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.627307843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8216a49-0789-4dcd-8e27-284f724fc4e9 name=/runtime.v1.RuntimeService/Version
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.628307654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9240de1f-db4b-44e7-a252-48b2e185de44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.629544598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117049629518947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9240de1f-db4b-44e7-a252-48b2e185de44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.630173618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12e4b476-1070-44bd-a471-4e976c4d5550 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.630230729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12e4b476-1070-44bd-a471-4e976c4d5550 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 19:10:49 addons-649719 crio[659]: time="2024-12-13 19:10:49.630542996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0084b577a2be4f5b577b2280be0e9f309c7900f8ac4a1827654a22b799b942ea,PodSandboxId:995ef3f8f3f23b1dc075220a0e67f07b03e006586e1b55c6a84be26e93fbe45c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734116903205125623,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-j75hj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 136a8667-7817-4513-8b26-b79a9e43f9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0ee3dc60bbdb5838d1bbac92c7409bbc6e77896db6b34a0d39fa9429ad801a,PodSandboxId:e84f15dce45f3b4fe90021792f3ae6d66ffc03599cfbe52a7881afd8d8ad2fee,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734116762758977686,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80f99f80-07c5-4365-88c6-8a2b2e3453d1,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56d75527174b7eb2f5998810ab9794fa411781720ad144aaae93590e2d9b60ab,PodSandboxId:912ee752dd607f17d0fbe349f82b564c64180406a060ed91e7bf5e3b0a4edc91,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734116687516845729,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82b39ce9-4061-4ed5-b
c86-ef917d598ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:263fd07c67c848f707f938783c1239dab917cb84002245c3d6586cdda41a5b73,PodSandboxId:6cff6e843cc0037494e83d40dc3fbd64788c0e7d52beb94801df90e3a3835e56,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734116651203894825,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-m8bmq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 19020284-7a06-4b3e-af82-964b038c6aea,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78979dd62cb5ca14d2efb6a774cddc05c9e6aedda81aefcd36a122957d230ee3,PodSandboxId:f5ce684e2db2b1b029ae8ad44e1fc6f13676de0774cd5f47ab724a3179e3df72,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734116637816827935,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pwrjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd61049-3892-4422-bb65-27b37c47bafb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d,PodSandboxId:da97e93fd1bf4c0b1fdf136afd0cb49ec2dbe737007fff61409dc1abcfc8f20c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734116602887220365,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe88593-e74e-4b8a-841d-81f2488dc9b4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575,PodSandboxId:b0cacdc5e5b3fc26d1bc7d46ec3843b5f6754827e32a2fb4c21714b9704dd351,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734116599447636348,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w7p7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff9e37e-de38-4caa-b342-bd85b02357c1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb,PodSandboxId:467fc785ceb6d5cad54d5876ec2883f5185ea30522a4d1500821be277d589664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734116597412211709,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zhqf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17cc9d6e-fee4-451f-a0d8-91ebf081f894,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4,PodSandboxId:29262b8325138176a141fb98030d21abff1d8cc2d10d37b924138e10077ec4ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734116586352035263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58dc36838bbf299e1c66f9ab610eaa1b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6,PodSandboxId:58dd99b1ffe8ca22e5e63a79b26e6e57459eb7cbf65522a26933fdafcfc2e0c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734116586356886213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764466eb2bbb72ea386c13d5dd92f164,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5,PodSandboxId:40e4411e40c5ac08b95a632d29da6b319fdca424a2bafc56d40682893aca1869,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:C
ONTAINER_RUNNING,CreatedAt:1734116586347834719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7b3c4228efb62a65516b8dd00f1b04,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395,PodSandboxId:0b1f2f7a4f9e9d60fac9e7a4e0cac8c6cf9edcc960b528e8f3a60ceca99595be,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1734116586137750665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649719,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1507a6fd303286ee1ff706e9946458b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12e4b476-1070-44bd-a471-4e976c4d5550 name=/runtime.v1.RuntimeService/ListContainers
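
	The repeating Version / ImageFsInfo / ListContainers request-response pairs above are routine kubelet CRI polling of cri-o rather than errors; every container in the responses is reported as CONTAINER_RUNNING. A minimal sketch for reproducing the same listing interactively, assuming crictl is available inside the minikube VM (it normally is with the cri-o runtime):
	
		out/minikube-linux-amd64 -p addons-649719 ssh "sudo crictl ps"          # table view, corresponds to the container status section below
		out/minikube-linux-amd64 -p addons-649719 ssh "sudo crictl ps -o json"  # raw view, corresponds to the ListContainers responses above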
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0084b577a2be4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   995ef3f8f3f23       hello-world-app-55bf9c44b4-j75hj
	7c0ee3dc60bbd       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago       Running             nginx                     0                   e84f15dce45f3       nginx
	56d75527174b7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   912ee752dd607       busybox
	263fd07c67c84       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   6cff6e843cc00       metrics-server-84c5f94fbc-m8bmq
	78979dd62cb5c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   f5ce684e2db2b       amd-gpu-device-plugin-pwrjv
	c9bdc3b6f210c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   da97e93fd1bf4       storage-provisioner
	2c0a3ba6ea0fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   b0cacdc5e5b3f       coredns-7c65d6cfc9-w7p7w
	a1fb13faad0ab       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   467fc785ceb6d       kube-proxy-zhqf7
	ce65a54464d90       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   58dd99b1ffe8c       kube-scheduler-addons-649719
	0533f80981943       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   29262b8325138       etcd-addons-649719
	0f0d6029ca634       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   40e4411e40c5a       kube-apiserver-addons-649719
	72d9cac40167e       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   0b1f2f7a4f9e9       kube-controller-manager-addons-649719
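
	The nginx backend container from the Ingress test (7c0ee3dc60bbd, pod nginx in the default namespace) shows as Running here, so the backend itself came up and the controller-side data path is the next place to look. A hypothetical follow-up, assuming the deployment name the minikube ingress addon normally creates (ingress-nginx-controller):
	
		out/minikube-linux-amd64 -p addons-649719 ssh "sudo crictl inspect 7c0ee3dc60bbd"    # truncated IDs from the table above should resolve as prefixes
		kubectl --context addons-649719 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50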
	
	
	==> coredns [2c0a3ba6ea0fcaf473e296de0f01d60aa9cabc908c3cc23a0636c9885738e575] <==
	[INFO] 10.244.0.22:56367 - 59746 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006672s
	[INFO] 10.244.0.22:56367 - 42804 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000233354s
	[INFO] 10.244.0.22:52107 - 15749 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070203s
	[INFO] 10.244.0.22:56367 - 11586 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000115641s
	[INFO] 10.244.0.22:52107 - 2533 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069199s
	[INFO] 10.244.0.22:56367 - 62381 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000112664s
	[INFO] 10.244.0.22:52107 - 39211 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073098s
	[INFO] 10.244.0.22:56367 - 14888 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000120499s
	[INFO] 10.244.0.22:52107 - 2938 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000374083s
	[INFO] 10.244.0.22:56367 - 33846 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000153971s
	[INFO] 10.244.0.22:52107 - 14538 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057304s
	[INFO] 10.244.0.22:60773 - 29375 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000216343s
	[INFO] 10.244.0.22:34149 - 50387 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057343s
	[INFO] 10.244.0.22:60773 - 36659 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033948s
	[INFO] 10.244.0.22:60773 - 48385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042875s
	[INFO] 10.244.0.22:34149 - 26764 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081824s
	[INFO] 10.244.0.22:60773 - 6941 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064297s
	[INFO] 10.244.0.22:34149 - 22495 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063766s
	[INFO] 10.244.0.22:60773 - 33515 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057331s
	[INFO] 10.244.0.22:34149 - 38533 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076636s
	[INFO] 10.244.0.22:60773 - 58234 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00011738s
	[INFO] 10.244.0.22:34149 - 22675 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000529s
	[INFO] 10.244.0.22:60773 - 2626 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004408s
	[INFO] 10.244.0.22:34149 - 57399 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004266s
	[INFO] 10.244.0.22:34149 - 52021 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060339s
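
	The NXDOMAIN lines above are expected behavior, not failures: the client at 10.244.0.22 (evidently a pod in the ingress-nginx namespace, given the first search suffix) resolves hello-world-app.default.svc.cluster.local, which has fewer than five dots, so under the usual ndots:5 pod DNS policy every search suffix is tried before the name is queried as-is, and only the final fully-qualified lookup returns NOERROR. An illustrative /etc/resolv.conf for such a pod; the nameserver address is the conventional kube-dns ClusterIP and is an assumption, not taken from this run:
	
		search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
		nameserver 10.96.0.10
		options ndots:5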
	
	
	==> describe nodes <==
	Name:               addons-649719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-649719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-649719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_03_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649719
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:03:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649719
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:10:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:08:47 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:08:47 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:08:47 +0000   Fri, 13 Dec 2024 19:03:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:08:47 +0000   Fri, 13 Dec 2024 19:03:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    addons-649719
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec861977a6ee432faa82b25b478a8504
	  System UUID:                ec861977-a6ee-432f-aa82-b25b478a8504
	  Boot ID:                    56bfbfda-6224-405b-9d0d-89e8546fb391
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     hello-world-app-55bf9c44b4-j75hj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 amd-gpu-device-plugin-pwrjv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 coredns-7c65d6cfc9-w7p7w                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m33s
	  kube-system                 etcd-addons-649719                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m40s
	  kube-system                 kube-apiserver-addons-649719             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-controller-manager-addons-649719    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-zhqf7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-addons-649719             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 metrics-server-84c5f94fbc-m8bmq          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m28s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m31s  kube-proxy       
	  Normal  Starting                 7m39s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s  kubelet          Node addons-649719 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s  kubelet          Node addons-649719 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s  kubelet          Node addons-649719 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m38s  kubelet          Node addons-649719 status is now: NodeReady
	  Normal  RegisteredNode           7m34s  node-controller  Node addons-649719 event: Registered Node addons-649719 in Controller
	  Normal  CIDRAssignmentFailed     7m34s  cidrAllocator    Node addons-649719 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +5.980879] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.083104] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.779535] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.145779] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.097100] kauditd_printk_skb: 135 callbacks suppressed
	[  +5.115071] kauditd_printk_skb: 136 callbacks suppressed
	[ +10.175203] kauditd_printk_skb: 69 callbacks suppressed
	[ +20.695745] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 19:04] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.085162] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.177172] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.635763] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.281857] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.113327] kauditd_printk_skb: 16 callbacks suppressed
	[Dec13 19:05] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.019709] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.171411] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.451898] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.735737] kauditd_printk_skb: 31 callbacks suppressed
	[ +10.520037] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.563295] kauditd_printk_skb: 7 callbacks suppressed
	[Dec13 19:06] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.664458] kauditd_printk_skb: 9 callbacks suppressed
	[Dec13 19:08] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.953300] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0533f80981943f87cc95d251a38f03699f36e780158a22dcee9a832187925fd4] <==
	{"level":"warn","ts":"2024-12-13T19:04:31.039128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:30.693319Z","time spent":"345.802438ms","remote":"127.0.0.1:40898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-13T19:04:31.039253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.151617ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039267Z","caller":"traceutil/trace.go:171","msg":"trace[1284115171] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1069; }","duration":"295.167517ms","start":"2024-12-13T19:04:30.744094Z","end":"2024-12-13T19:04:31.039262Z","steps":["trace[1284115171] 'range keys from in-memory index tree'  (duration: 295.144984ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:31.039332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.499742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039342Z","caller":"traceutil/trace.go:171","msg":"trace[396916360] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"198.51089ms","start":"2024-12-13T19:04:30.840828Z","end":"2024-12-13T19:04:31.039339Z","steps":["trace[396916360] 'range keys from in-memory index tree'  (duration: 198.46325ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:31.039531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.567988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:31.039549Z","caller":"traceutil/trace.go:171","msg":"trace[1110299860] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"122.588336ms","start":"2024-12-13T19:04:30.916956Z","end":"2024-12-13T19:04:31.039544Z","steps":["trace[1110299860] 'range keys from in-memory index tree'  (duration: 122.492188ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:33.832324Z","caller":"traceutil/trace.go:171","msg":"trace[81353000] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"261.83289ms","start":"2024-12-13T19:04:33.570479Z","end":"2024-12-13T19:04:33.832312Z","steps":["trace[81353000] 'process raft request'  (duration: 261.526429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:33.833481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.467685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:33.833528Z","caller":"traceutil/trace.go:171","msg":"trace[1146071498] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1079; }","duration":"152.567682ms","start":"2024-12-13T19:04:33.680951Z","end":"2024-12-13T19:04:33.833519Z","steps":["trace[1146071498] 'agreement among raft nodes before linearized reading'  (duration: 152.443856ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:33.832112Z","caller":"traceutil/trace.go:171","msg":"trace[175880336] linearizableReadLoop","detail":"{readStateIndex:1112; appliedIndex:1111; }","duration":"151.125009ms","start":"2024-12-13T19:04:33.680973Z","end":"2024-12-13T19:04:33.832098Z","steps":["trace[175880336] 'read index received'  (duration: 150.991367ms)","trace[175880336] 'applied index is now lower than readState.Index'  (duration: 133.232µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:33.834708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.057875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:33.834815Z","caller":"traceutil/trace.go:171","msg":"trace[1053988517] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"141.229163ms","start":"2024-12-13T19:04:33.693578Z","end":"2024-12-13T19:04:33.834807Z","steps":["trace[1053988517] 'agreement among raft nodes before linearized reading'  (duration: 141.040802ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:37.041959Z","caller":"traceutil/trace.go:171","msg":"trace[564859694] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"329.347059ms","start":"2024-12-13T19:04:36.712584Z","end":"2024-12-13T19:04:37.041931Z","steps":["trace[564859694] 'process raft request'  (duration: 329.155796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.042388Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:36.712561Z","time spent":"329.4448ms","remote":"127.0.0.1:40992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<>"}
	{"level":"info","ts":"2024-12-13T19:04:37.042853Z","caller":"traceutil/trace.go:171","msg":"trace[169519707] linearizableReadLoop","detail":"{readStateIndex:1130; appliedIndex:1130; }","duration":"298.75709ms","start":"2024-12-13T19:04:36.744086Z","end":"2024-12-13T19:04:37.042843Z","steps":["trace[169519707] 'read index received'  (duration: 298.725627ms)","trace[169519707] 'applied index is now lower than readState.Index'  (duration: 30.668µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:04:37.042966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.867773ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:37.042985Z","caller":"traceutil/trace.go:171","msg":"trace[1925101956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1096; }","duration":"298.897087ms","start":"2024-12-13T19:04:36.744082Z","end":"2024-12-13T19:04:37.042979Z","steps":["trace[1925101956] 'agreement among raft nodes before linearized reading'  (duration: 298.805831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.044681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.96722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:04:37.044815Z","caller":"traceutil/trace.go:171","msg":"trace[401929220] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1097; }","duration":"204.151491ms","start":"2024-12-13T19:04:36.840655Z","end":"2024-12-13T19:04:37.044807Z","steps":["trace[401929220] 'agreement among raft nodes before linearized reading'  (duration: 203.873124ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:04:37.045195Z","caller":"traceutil/trace.go:171","msg":"trace[2006328592] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"300.464134ms","start":"2024-12-13T19:04:36.744676Z","end":"2024-12-13T19:04:37.045140Z","steps":["trace[2006328592] 'process raft request'  (duration: 299.728571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:04:37.045326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:04:36.744660Z","time spent":"300.632769ms","remote":"127.0.0.1:40802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-k6775.1810d1ee041cd097\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-k6775.1810d1ee041cd097\" value_size:675 lease:419560174168363166 >> failure:<>"}
	{"level":"warn","ts":"2024-12-13T19:05:19.787043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.478421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:05:19.787195Z","caller":"traceutil/trace.go:171","msg":"trace[460394353] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1346; }","duration":"105.668748ms","start":"2024-12-13T19:05:19.681497Z","end":"2024-12-13T19:05:19.787166Z","steps":["trace[460394353] 'range keys from in-memory index tree'  (duration: 105.430044ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:05:49.209046Z","caller":"traceutil/trace.go:171","msg":"trace[2078710238] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"102.318653ms","start":"2024-12-13T19:05:49.106707Z","end":"2024-12-13T19:05:49.209026Z","steps":["trace[2078710238] 'process raft request'  (duration: 102.218983ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:10:49 up 8 min,  0 users,  load average: 0.08, 0.70, 0.51
	Linux addons-649719 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f0d6029ca6340f2987f735fb5297cbfae0572cf198d2ad19a0cec9a347e6ca5] <==
	 > logger="UnhandledError"
	E1213 19:05:12.354242       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.108.150:443: connect: connection refused" logger="UnhandledError"
	E1213 19:05:12.359210       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.108.150:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.108.150:443: connect: connection refused" logger="UnhandledError"
	I1213 19:05:12.429727       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 19:05:14.047347       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.7.186"}
	I1213 19:05:40.553041       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:05:41.578348       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1213 19:05:42.927361       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 19:05:57.732783       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:05:57.985110       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.171.190"}
	I1213 19:05:58.662150       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 19:06:13.820268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.820462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.852925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.853068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.853935       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.854034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.867062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.870543       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:06:13.900819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:06:13.900855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:06:14.854622       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:06:14.901242       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1213 19:06:14.992194       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1213 19:08:20.410869       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.96.29"}
	
	
	==> kube-controller-manager [72d9cac40167e3d15a7415dfbb79a5cf0eac1d9cb167d02a0a7196ddb02af395] <==
	I1213 19:08:47.954131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-649719"
	W1213 19:08:53.517098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:53.517162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:53.742184       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:53.742254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:56.930408       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:56.930494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:08:57.368578       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:08:57.368625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:29.553568       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:29.553749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:41.836383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:41.836462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:47.655825       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:47.655877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:09:48.764162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:09:48.764285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:21.982514       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:21.982648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:22.895589       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:22.895740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:32.881307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:32.881482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:10:41.568085       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:10:41.568343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a1fb13faad0ab0a668c97b4d6313597c7e671faf4950d8722f7a12d14331fecb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 19:03:18.022649       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 19:03:18.040969       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.191"]
	E1213 19:03:18.041042       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:03:18.276891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 19:03:18.276925       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 19:03:18.276959       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:03:18.281962       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:03:18.282174       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:03:18.282185       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:03:18.290371       1 config.go:199] "Starting service config controller"
	I1213 19:03:18.290393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:03:18.290410       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:03:18.290414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:03:18.290800       1 config.go:328] "Starting node config controller"
	I1213 19:03:18.290829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:03:18.390518       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1213 19:03:18.390530       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:03:18.390989       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ce65a54464d909b9d7341a915c84ab188c43d7a34e17ea9d7112a0db0b2089e6] <==
	W1213 19:03:08.654691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:03:08.654864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:03:08.656602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:03:08.656722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:03:08.656848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:08.656923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:03:08.657000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.511119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 19:03:09.511223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.516355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 19:03:09.516588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.630508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:03:09.630562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.712131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:03:09.712329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.724759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:03:09.725634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.766353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 19:03:09.766406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:03:09.778405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:03:09.778533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1213 19:03:10.142318       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:09:21 addons-649719 kubelet[1219]: E1213 19:09:21.381526    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116961381172940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:21 addons-649719 kubelet[1219]: E1213 19:09:21.381901    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116961381172940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:31 addons-649719 kubelet[1219]: E1213 19:09:31.384013    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116971383671785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:31 addons-649719 kubelet[1219]: E1213 19:09:31.384289    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116971383671785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:41 addons-649719 kubelet[1219]: E1213 19:09:41.387729    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116981387378050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:41 addons-649719 kubelet[1219]: E1213 19:09:41.387767    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116981387378050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:42 addons-649719 kubelet[1219]: I1213 19:09:42.977856    1219 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:09:51 addons-649719 kubelet[1219]: E1213 19:09:51.391051    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116991390740557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:09:51 addons-649719 kubelet[1219]: E1213 19:09:51.391089    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734116991390740557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:01 addons-649719 kubelet[1219]: E1213 19:10:01.394285    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117001393980253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:01 addons-649719 kubelet[1219]: E1213 19:10:01.394742    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117001393980253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:11 addons-649719 kubelet[1219]: E1213 19:10:11.000012    1219 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 13 19:10:11 addons-649719 kubelet[1219]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 13 19:10:11 addons-649719 kubelet[1219]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 19:10:11 addons-649719 kubelet[1219]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 19:10:11 addons-649719 kubelet[1219]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 19:10:11 addons-649719 kubelet[1219]: E1213 19:10:11.397662    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117011397354808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:11 addons-649719 kubelet[1219]: E1213 19:10:11.397718    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117011397354808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:21 addons-649719 kubelet[1219]: E1213 19:10:21.400859    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117021400612416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:21 addons-649719 kubelet[1219]: E1213 19:10:21.400897    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117021400612416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:31 addons-649719 kubelet[1219]: E1213 19:10:31.403826    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117031403246463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:31 addons-649719 kubelet[1219]: E1213 19:10:31.404204    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117031403246463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:34 addons-649719 kubelet[1219]: I1213 19:10:34.979273    1219 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pwrjv" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:10:41 addons-649719 kubelet[1219]: E1213 19:10:41.409616    1219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117041408935386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:10:41 addons-649719 kubelet[1219]: E1213 19:10:41.409687    1219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117041408935386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c9bdc3b6f210cbf3a0f57270b1d9331971e3412a2ccd49546898c0fa2f41551d] <==
	I1213 19:03:23.346095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:03:23.362073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:03:23.362141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:03:23.381211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:03:23.381363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649719_ab1d4990-2777-474e-8af6-f35340671464!
	I1213 19:03:23.381405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c96f0d72-a262-42ca-b0ef-d20683a4c492", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649719_ab1d4990-2777-474e-8af6-f35340671464 became leader
	I1213 19:03:23.483543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649719_ab1d4990-2777-474e-8af6-f35340671464!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-649719 -n addons-649719
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (324.78s)

                                                
                                    
x
+
TestPreload (288.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-089936 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1213 19:59:44.009838   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-089936 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m5.489672979s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-089936 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-089936 image pull gcr.io/k8s-minikube/busybox: (3.134930022s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-089936
E1213 20:00:41.732804   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-089936: (1m30.95176239s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-089936 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-089936 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.98902604s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-089936 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-13 20:02:32.195692104 +0000 UTC m=+3650.471185046
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-089936 -n test-preload-089936
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-089936 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-089936 logs -n 25: (1.015186981s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-352319 ssh -n                                                                 | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	|         | multinode-352319-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-352319 ssh -n multinode-352319 sudo cat                                       | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	|         | /home/docker/cp-test_multinode-352319-m03_multinode-352319.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-352319 cp multinode-352319-m03:/home/docker/cp-test.txt                       | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	|         | multinode-352319-m02:/home/docker/cp-test_multinode-352319-m03_multinode-352319-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-352319 ssh -n                                                                 | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	|         | multinode-352319-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-352319 ssh -n multinode-352319-m02 sudo cat                                   | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	|         | /home/docker/cp-test_multinode-352319-m03_multinode-352319-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-352319 node stop m03                                                          | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:45 UTC |
	| node    | multinode-352319 node start                                                             | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:45 UTC | 13 Dec 24 19:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-352319                                                                | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:46 UTC |                     |
	| stop    | -p multinode-352319                                                                     | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:46 UTC | 13 Dec 24 19:49 UTC |
	| start   | -p multinode-352319                                                                     | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:49 UTC | 13 Dec 24 19:52 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-352319                                                                | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:52 UTC |                     |
	| node    | multinode-352319 node delete                                                            | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:52 UTC | 13 Dec 24 19:52 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-352319 stop                                                                   | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:52 UTC | 13 Dec 24 19:55 UTC |
	| start   | -p multinode-352319                                                                     | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:55 UTC | 13 Dec 24 19:57 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-352319                                                                | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC |                     |
	| start   | -p multinode-352319-m02                                                                 | multinode-352319-m02 | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-352319-m03                                                                 | multinode-352319-m03 | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC | 13 Dec 24 19:57 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-352319                                                                 | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC |                     |
	| delete  | -p multinode-352319-m03                                                                 | multinode-352319-m03 | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC | 13 Dec 24 19:57 UTC |
	| delete  | -p multinode-352319                                                                     | multinode-352319     | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC | 13 Dec 24 19:57 UTC |
	| start   | -p test-preload-089936                                                                  | test-preload-089936  | jenkins | v1.34.0 | 13 Dec 24 19:57 UTC | 13 Dec 24 19:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-089936 image pull                                                          | test-preload-089936  | jenkins | v1.34.0 | 13 Dec 24 19:59 UTC | 13 Dec 24 19:59 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-089936                                                                  | test-preload-089936  | jenkins | v1.34.0 | 13 Dec 24 19:59 UTC | 13 Dec 24 20:01 UTC |
	| start   | -p test-preload-089936                                                                  | test-preload-089936  | jenkins | v1.34.0 | 13 Dec 24 20:01 UTC | 13 Dec 24 20:02 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-089936 image list                                                          | test-preload-089936  | jenkins | v1.34.0 | 13 Dec 24 20:02 UTC | 13 Dec 24 20:02 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 20:01:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 20:01:26.023222   52214 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:01:26.023326   52214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:01:26.023337   52214 out.go:358] Setting ErrFile to fd 2...
	I1213 20:01:26.023342   52214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:01:26.023512   52214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:01:26.024042   52214 out.go:352] Setting JSON to false
	I1213 20:01:26.024952   52214 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6229,"bootTime":1734113857,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:01:26.025040   52214 start.go:139] virtualization: kvm guest
	I1213 20:01:26.028136   52214 out.go:177] * [test-preload-089936] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:01:26.029430   52214 notify.go:220] Checking for updates...
	I1213 20:01:26.029478   52214 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:01:26.030747   52214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:01:26.031893   52214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:01:26.032971   52214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:01:26.034028   52214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:01:26.035191   52214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:01:26.036674   52214 config.go:182] Loaded profile config "test-preload-089936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1213 20:01:26.037046   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:01:26.037092   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:01:26.051685   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1213 20:01:26.052099   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:01:26.052688   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:01:26.052719   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:01:26.053018   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:01:26.053157   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:26.055062   52214 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1213 20:01:26.056180   52214 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:01:26.056454   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:01:26.056508   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:01:26.070689   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I1213 20:01:26.071122   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:01:26.071567   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:01:26.071590   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:01:26.071908   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:01:26.072085   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:26.105826   52214 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 20:01:26.107143   52214 start.go:297] selected driver: kvm2
	I1213 20:01:26.107156   52214 start.go:901] validating driver "kvm2" against &{Name:test-preload-089936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-089936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:01:26.107256   52214 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:01:26.108239   52214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:01:26.108353   52214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:01:26.122698   52214 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:01:26.123103   52214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:01:26.123133   52214 cni.go:84] Creating CNI manager for ""
	I1213 20:01:26.123180   52214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:01:26.123234   52214 start.go:340] cluster config:
	{Name:test-preload-089936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-089936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:01:26.123334   52214 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:01:26.124923   52214 out.go:177] * Starting "test-preload-089936" primary control-plane node in "test-preload-089936" cluster
	I1213 20:01:26.126146   52214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1213 20:01:26.595431   52214 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1213 20:01:26.595456   52214 cache.go:56] Caching tarball of preloaded images
	I1213 20:01:26.595689   52214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1213 20:01:26.597546   52214 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1213 20:01:26.598786   52214 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1213 20:01:26.698003   52214 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1213 20:01:38.589691   52214 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1213 20:01:38.589783   52214 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1213 20:01:39.429128   52214 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1213 20:01:39.429248   52214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/config.json ...
	I1213 20:01:39.429491   52214 start.go:360] acquireMachinesLock for test-preload-089936: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:01:39.429552   52214 start.go:364] duration metric: took 40.157µs to acquireMachinesLock for "test-preload-089936"
	I1213 20:01:39.429568   52214 start.go:96] Skipping create...Using existing machine configuration
	I1213 20:01:39.429573   52214 fix.go:54] fixHost starting: 
	I1213 20:01:39.429824   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:01:39.429856   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:01:39.444061   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I1213 20:01:39.444498   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:01:39.445011   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:01:39.445034   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:01:39.445373   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:01:39.445532   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:39.445664   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetState
	I1213 20:01:39.447342   52214 fix.go:112] recreateIfNeeded on test-preload-089936: state=Stopped err=<nil>
	I1213 20:01:39.447367   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	W1213 20:01:39.447544   52214 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 20:01:39.449447   52214 out.go:177] * Restarting existing kvm2 VM for "test-preload-089936" ...
	I1213 20:01:39.450620   52214 main.go:141] libmachine: (test-preload-089936) Calling .Start
	I1213 20:01:39.450772   52214 main.go:141] libmachine: (test-preload-089936) starting domain...
	I1213 20:01:39.450794   52214 main.go:141] libmachine: (test-preload-089936) ensuring networks are active...
	I1213 20:01:39.451464   52214 main.go:141] libmachine: (test-preload-089936) Ensuring network default is active
	I1213 20:01:39.451767   52214 main.go:141] libmachine: (test-preload-089936) Ensuring network mk-test-preload-089936 is active
	I1213 20:01:39.452070   52214 main.go:141] libmachine: (test-preload-089936) getting domain XML...
	I1213 20:01:39.452756   52214 main.go:141] libmachine: (test-preload-089936) creating domain...
	I1213 20:01:40.615901   52214 main.go:141] libmachine: (test-preload-089936) waiting for IP...
	I1213 20:01:40.616734   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:40.617091   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:40.617183   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:40.617093   52299 retry.go:31] will retry after 264.590846ms: waiting for domain to come up
	I1213 20:01:40.883709   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:40.884140   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:40.884165   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:40.884111   52299 retry.go:31] will retry after 311.023684ms: waiting for domain to come up
	I1213 20:01:41.196599   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:41.197084   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:41.197113   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:41.197034   52299 retry.go:31] will retry after 348.867828ms: waiting for domain to come up
	I1213 20:01:41.547516   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:41.547933   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:41.547961   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:41.547897   52299 retry.go:31] will retry after 571.525654ms: waiting for domain to come up
	I1213 20:01:42.120446   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:42.120829   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:42.120858   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:42.120804   52299 retry.go:31] will retry after 610.833134ms: waiting for domain to come up
	I1213 20:01:42.733562   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:42.733878   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:42.733903   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:42.733843   52299 retry.go:31] will retry after 600.405517ms: waiting for domain to come up
	I1213 20:01:43.335537   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:43.335934   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:43.335959   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:43.335904   52299 retry.go:31] will retry after 1.129458333s: waiting for domain to come up
	I1213 20:01:44.466608   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:44.467094   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:44.467116   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:44.467055   52299 retry.go:31] will retry after 1.002079233s: waiting for domain to come up
	I1213 20:01:45.471198   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:45.471658   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:45.471708   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:45.471634   52299 retry.go:31] will retry after 1.1274279s: waiting for domain to come up
	I1213 20:01:46.600157   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:46.600526   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:46.600551   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:46.600490   52299 retry.go:31] will retry after 2.164252179s: waiting for domain to come up
	I1213 20:01:48.767694   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:48.768176   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:48.768205   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:48.768160   52299 retry.go:31] will retry after 1.866485602s: waiting for domain to come up
	I1213 20:01:50.636085   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:50.636490   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:50.636528   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:50.636459   52299 retry.go:31] will retry after 2.982133046s: waiting for domain to come up
	I1213 20:01:53.622496   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:53.622853   52214 main.go:141] libmachine: (test-preload-089936) DBG | unable to find current IP address of domain test-preload-089936 in network mk-test-preload-089936
	I1213 20:01:53.622870   52214 main.go:141] libmachine: (test-preload-089936) DBG | I1213 20:01:53.622818   52299 retry.go:31] will retry after 3.115469931s: waiting for domain to come up
	I1213 20:01:56.739842   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.740305   52214 main.go:141] libmachine: (test-preload-089936) found domain IP: 192.168.39.50
	I1213 20:01:56.740336   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has current primary IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.740350   52214 main.go:141] libmachine: (test-preload-089936) reserving static IP address...
	I1213 20:01:56.740745   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "test-preload-089936", mac: "52:54:00:aa:a8:7c", ip: "192.168.39.50"} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:56.740769   52214 main.go:141] libmachine: (test-preload-089936) DBG | skip adding static IP to network mk-test-preload-089936 - found existing host DHCP lease matching {name: "test-preload-089936", mac: "52:54:00:aa:a8:7c", ip: "192.168.39.50"}
	I1213 20:01:56.740781   52214 main.go:141] libmachine: (test-preload-089936) reserved static IP address 192.168.39.50 for domain test-preload-089936
	I1213 20:01:56.740811   52214 main.go:141] libmachine: (test-preload-089936) DBG | Getting to WaitForSSH function...
	I1213 20:01:56.740829   52214 main.go:141] libmachine: (test-preload-089936) waiting for SSH...
	I1213 20:01:56.742979   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.743363   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:56.743383   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.743522   52214 main.go:141] libmachine: (test-preload-089936) DBG | Using SSH client type: external
	I1213 20:01:56.743549   52214 main.go:141] libmachine: (test-preload-089936) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa (-rw-------)
	I1213 20:01:56.743569   52214 main.go:141] libmachine: (test-preload-089936) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:01:56.743578   52214 main.go:141] libmachine: (test-preload-089936) DBG | About to run SSH command:
	I1213 20:01:56.743587   52214 main.go:141] libmachine: (test-preload-089936) DBG | exit 0
	I1213 20:01:56.870621   52214 main.go:141] libmachine: (test-preload-089936) DBG | SSH cmd err, output: <nil>: 
	I1213 20:01:56.871007   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetConfigRaw
	I1213 20:01:56.871638   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetIP
	I1213 20:01:56.873859   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.874230   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:56.874261   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.874526   52214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/config.json ...
	I1213 20:01:56.874725   52214 machine.go:93] provisionDockerMachine start ...
	I1213 20:01:56.874742   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:56.874926   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:56.876859   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.877186   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:56.877210   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.877339   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:56.877498   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:56.877689   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:56.877809   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:56.877945   52214 main.go:141] libmachine: Using SSH client type: native
	I1213 20:01:56.878161   52214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1213 20:01:56.878173   52214 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 20:01:56.982684   52214 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 20:01:56.982718   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetMachineName
	I1213 20:01:56.982968   52214 buildroot.go:166] provisioning hostname "test-preload-089936"
	I1213 20:01:56.982991   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetMachineName
	I1213 20:01:56.983172   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:56.985568   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.985852   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:56.985874   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:56.986011   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:56.986168   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:56.986321   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:56.986459   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:56.986603   52214 main.go:141] libmachine: Using SSH client type: native
	I1213 20:01:56.986766   52214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1213 20:01:56.986777   52214 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-089936 && echo "test-preload-089936" | sudo tee /etc/hostname
	I1213 20:01:57.111062   52214 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-089936
	
	I1213 20:01:57.111096   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.113375   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.113686   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.113719   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.113835   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.113984   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.114121   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.114228   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.114434   52214 main.go:141] libmachine: Using SSH client type: native
	I1213 20:01:57.114604   52214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1213 20:01:57.114618   52214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-089936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-089936/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-089936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:01:57.226802   52214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:01:57.226832   52214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:01:57.226872   52214 buildroot.go:174] setting up certificates
	I1213 20:01:57.226881   52214 provision.go:84] configureAuth start
	I1213 20:01:57.226893   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetMachineName
	I1213 20:01:57.227171   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetIP
	I1213 20:01:57.229659   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.229973   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.230012   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.230096   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.231964   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.232313   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.232346   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.232438   52214 provision.go:143] copyHostCerts
	I1213 20:01:57.232513   52214 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:01:57.232531   52214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:01:57.232611   52214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:01:57.232715   52214 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:01:57.232726   52214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:01:57.232765   52214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:01:57.232844   52214 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:01:57.232857   52214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:01:57.232891   52214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:01:57.232965   52214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.test-preload-089936 san=[127.0.0.1 192.168.39.50 localhost minikube test-preload-089936]
	I1213 20:01:57.388585   52214 provision.go:177] copyRemoteCerts
	I1213 20:01:57.388635   52214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:01:57.388658   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.391203   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.391481   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.391509   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.391670   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.391840   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.391993   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.392119   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:01:57.476194   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:01:57.498372   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 20:01:57.519886   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 20:01:57.540798   52214 provision.go:87] duration metric: took 313.906065ms to configureAuth
	I1213 20:01:57.540826   52214 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:01:57.541007   52214 config.go:182] Loaded profile config "test-preload-089936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1213 20:01:57.541103   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.543787   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.544184   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.544209   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.544415   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.544589   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.544736   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.544852   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.545008   52214 main.go:141] libmachine: Using SSH client type: native
	I1213 20:01:57.545192   52214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1213 20:01:57.545216   52214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:01:57.751947   52214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:01:57.751973   52214 machine.go:96] duration metric: took 877.234976ms to provisionDockerMachine
	I1213 20:01:57.752000   52214 start.go:293] postStartSetup for "test-preload-089936" (driver="kvm2")
	I1213 20:01:57.752013   52214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:01:57.752036   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:57.752350   52214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:01:57.752386   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.754933   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.755268   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.755303   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.755430   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.755595   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.755740   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.755846   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:01:57.836023   52214 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:01:57.839823   52214 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:01:57.839842   52214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:01:57.839947   52214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:01:57.840022   52214 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:01:57.840110   52214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:01:57.848372   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:01:57.869790   52214 start.go:296] duration metric: took 117.778236ms for postStartSetup
	I1213 20:01:57.869826   52214 fix.go:56] duration metric: took 18.440252556s for fixHost
	I1213 20:01:57.869846   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.872385   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.872668   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.872709   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.872832   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.873007   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.873160   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.873271   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.873390   52214 main.go:141] libmachine: Using SSH client type: native
	I1213 20:01:57.873551   52214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1213 20:01:57.873561   52214 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:01:57.978904   52214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734120117.952564761
	
	I1213 20:01:57.978934   52214 fix.go:216] guest clock: 1734120117.952564761
	I1213 20:01:57.978941   52214 fix.go:229] Guest: 2024-12-13 20:01:57.952564761 +0000 UTC Remote: 2024-12-13 20:01:57.869829596 +0000 UTC m=+31.883233833 (delta=82.735165ms)
	I1213 20:01:57.978958   52214 fix.go:200] guest clock delta is within tolerance: 82.735165ms
	I1213 20:01:57.978962   52214 start.go:83] releasing machines lock for "test-preload-089936", held for 18.549400141s
	I1213 20:01:57.978979   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:57.979216   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetIP
	I1213 20:01:57.981510   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.981825   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.981851   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.982008   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:57.982464   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:57.982624   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:01:57.982735   52214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:01:57.982772   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.982802   52214 ssh_runner.go:195] Run: cat /version.json
	I1213 20:01:57.982827   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:01:57.985182   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.985210   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.985591   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.985617   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.985642   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:57.985659   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:57.985830   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.985917   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:01:57.986009   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.986079   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:01:57.986139   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.986193   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:01:57.986241   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:01:57.986291   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:01:58.099931   52214 ssh_runner.go:195] Run: systemctl --version
	I1213 20:01:58.105627   52214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:01:58.247989   52214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:01:58.253366   52214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:01:58.253427   52214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:01:58.268759   52214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 20:01:58.268777   52214 start.go:495] detecting cgroup driver to use...
	I1213 20:01:58.268839   52214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:01:58.284709   52214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:01:58.297434   52214 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:01:58.297490   52214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:01:58.309866   52214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:01:58.322682   52214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:01:58.430189   52214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:01:58.556099   52214 docker.go:233] disabling docker service ...
	I1213 20:01:58.556211   52214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:01:58.570275   52214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:01:58.583050   52214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:01:58.710735   52214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:01:58.837713   52214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:01:58.850616   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:01:58.867491   52214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1213 20:01:58.867544   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.877220   52214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:01:58.877287   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.886805   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.896236   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.905910   52214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:01:58.915884   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.925716   52214 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:01:58.941751   52214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
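
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.7 pause image and the cgroupfs cgroup manager. As a rough illustration only (not minikube's actual code; the helper name and local-file handling are invented for the sketch), the same per-line key replacement could be written in Go like this:

// Sketch: replace a `key = ...` line in a CRI-O config file, mirroring the
// sed expressions in the log. Path and values are the ones shown above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// (?m) makes ^/$ match per line, like sed's line-oriented substitution.
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.7"); err != nil {
		panic(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
}
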
	I1213 20:01:58.951297   52214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:01:58.959998   52214 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:01:58.960048   52214 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:01:58.972521   52214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 20:01:58.980954   52214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:01:59.089899   52214 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:01:59.168513   52214 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:01:59.168594   52214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:01:59.172965   52214 start.go:563] Will wait 60s for crictl version
	I1213 20:01:59.173016   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:01:59.176408   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:01:59.213159   52214 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:01:59.213250   52214 ssh_runner.go:195] Run: crio --version
	I1213 20:01:59.240211   52214 ssh_runner.go:195] Run: crio --version
	I1213 20:01:59.268451   52214 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1213 20:01:59.269644   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetIP
	I1213 20:01:59.272469   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:59.272815   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:01:59.272837   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:01:59.273065   52214 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 20:01:59.276794   52214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:01:59.288444   52214 kubeadm.go:883] updating cluster {Name:test-preload-089936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-089936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:01:59.288600   52214 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1213 20:01:59.288661   52214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:01:59.321218   52214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1213 20:01:59.321283   52214 ssh_runner.go:195] Run: which lz4
	I1213 20:01:59.324898   52214 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:01:59.328558   52214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:01:59.328588   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1213 20:02:00.704861   52214 crio.go:462] duration metric: took 1.379986445s to copy over tarball
	I1213 20:02:00.704932   52214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:02:03.000800   52214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295839174s)
	I1213 20:02:03.000824   52214 crio.go:469] duration metric: took 2.295938534s to extract the tarball
	I1213 20:02:03.000831   52214 ssh_runner.go:146] rm: /preloaded.tar.lz4
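
The lines above show the preload flow: stat the tarball, scp it over when missing, extract it with lz4-compressed tar into /var, then delete it. A minimal stand-alone sketch of the extract-and-clean-up step, assuming local execution (the real run goes through ssh_runner over SSH):

// Sketch: extract /preloaded.tar.lz4 with the same tar flags as in the log,
// time the step, and remove the tarball afterwards.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing: %v", err)
	}
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extracted preload in %s", time.Since(start))
	_ = os.Remove(tarball)
}
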
	I1213 20:02:03.040953   52214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:02:03.080445   52214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1213 20:02:03.080467   52214 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 20:02:03.080530   52214 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:02:03.080578   52214 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.080598   52214 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.080618   52214 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.080644   52214 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.080668   52214 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.080778   52214 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.080792   52214 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1213 20:02:03.082035   52214 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.082106   52214 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.082037   52214 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.082036   52214 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.082037   52214 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1213 20:02:03.082038   52214 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.082036   52214 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:02:03.082128   52214 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.321356   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.322528   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.323107   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.327037   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1213 20:02:03.341977   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.348644   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.382079   52214 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1213 20:02:03.382132   52214 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.382173   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.437700   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.453522   52214 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1213 20:02:03.453575   52214 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.453627   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.470736   52214 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1213 20:02:03.470778   52214 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1213 20:02:03.470806   52214 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1213 20:02:03.470778   52214 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.470854   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.470894   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.472721   52214 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1213 20:02:03.472749   52214 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.472803   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.472820   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.472841   52214 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1213 20:02:03.472871   52214 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.472904   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.502859   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.502938   52214 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1213 20:02:03.502972   52214 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.502995   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.503013   52214 ssh_runner.go:195] Run: which crictl
	I1213 20:02:03.503065   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1213 20:02:03.503122   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.503159   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.530549   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.644709   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.644709   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.644766   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1213 20:02:03.644853   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.644898   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.645043   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.748936   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1213 20:02:03.749059   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1213 20:02:03.783096   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1213 20:02:03.783159   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1213 20:02:03.783160   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.783162   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1213 20:02:03.783270   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1213 20:02:03.854653   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1213 20:02:03.854762   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1213 20:02:03.854789   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1213 20:02:03.854892   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1213 20:02:03.920074   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1213 20:02:03.920105   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1213 20:02:03.920129   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1213 20:02:03.920143   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1213 20:02:03.920184   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1213 20:02:03.920202   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1213 20:02:03.920208   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1213 20:02:03.920210   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1213 20:02:03.920186   52214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1213 20:02:03.920188   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1213 20:02:03.920233   52214 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1213 20:02:03.920230   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1213 20:02:03.920263   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1213 20:02:03.969105   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1213 20:02:03.969129   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1213 20:02:03.969193   52214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1213 20:02:03.969273   52214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1213 20:02:04.325626   52214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:02:07.495914   52214 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.575626553s)
	I1213 20:02:07.495948   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1213 20:02:07.495975   52214 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1213 20:02:07.496014   52214 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.575800391s)
	I1213 20:02:07.496055   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1213 20:02:07.496024   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1213 20:02:07.496065   52214 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.575802279s)
	I1213 20:02:07.496078   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1213 20:02:07.496139   52214 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.526843741s)
	I1213 20:02:07.496151   52214 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1213 20:02:07.496168   52214 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.17051702s)
	I1213 20:02:07.844563   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1213 20:02:07.844610   52214 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1213 20:02:07.844658   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1213 20:02:07.986547   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1213 20:02:07.986591   52214 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1213 20:02:07.986644   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1213 20:02:10.129871   52214 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.143205189s)
	I1213 20:02:10.129907   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1213 20:02:10.129936   52214 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1213 20:02:10.129993   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1213 20:02:10.581926   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1213 20:02:10.581966   52214 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1213 20:02:10.582036   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1213 20:02:11.328561   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1213 20:02:11.328610   52214 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1213 20:02:11.328670   52214 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1213 20:02:11.971899   52214 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1213 20:02:11.971932   52214 cache_images.go:123] Successfully loaded all cached images
	I1213 20:02:11.971937   52214 cache_images.go:92] duration metric: took 8.891449066s to LoadCachedImages
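
The LoadCachedImages phase above copies each cached image tarball to /var/lib/minikube/images and loads it with `sudo podman load -i`, one image at a time (the `Loading image:` lines). A simplified sequential sketch of that loop; the tarball names are the ones from the log, and the skip-if-already-present logic is omitted:

// Sketch: load each cached image tarball into the container runtime via podman.
package main

import (
	"log"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	tarballs := []string{
		"kube-proxy_v1.24.4", "coredns_v1.8.6", "pause_3.7", "etcd_3.5.3-0",
		"kube-scheduler_v1.24.4", "kube-apiserver_v1.24.4", "kube-controller-manager_v1.24.4",
	}
	for _, name := range tarballs {
		path := filepath.Join("/var/lib/minikube/images", name)
		start := time.Now()
		if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
			log.Fatalf("podman load %s: %v\n%s", path, err, out)
		}
		log.Printf("loaded %s in %s", name, time.Since(start))
	}
}
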
	I1213 20:02:11.971954   52214 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.24.4 crio true true} ...
	I1213 20:02:11.972062   52214 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-089936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-089936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 20:02:11.972127   52214 ssh_runner.go:195] Run: crio config
	I1213 20:02:12.017504   52214 cni.go:84] Creating CNI manager for ""
	I1213 20:02:12.017524   52214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:02:12.017538   52214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 20:02:12.017555   52214 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-089936 NodeName:test-preload-089936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 20:02:12.017675   52214 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-089936"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:02:12.017736   52214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1213 20:02:12.027041   52214 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:02:12.027096   52214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:02:12.035883   52214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 20:02:12.050657   52214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:02:12.065113   52214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1213 20:02:12.080140   52214 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1213 20:02:12.083423   52214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
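
The bash one-liner above keeps /etc/hosts pointing control-plane.minikube.internal at the node IP: it filters out any existing line ending in that hostname and appends the fresh mapping. An equivalent local-file sketch in Go (hypothetical helper; the real command runs remotely with sudo):

// Sketch: drop any stale "<ip>\t<host>" line and append the current one.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // same effect as the `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.39.50", "control-plane.minikube.internal")
}
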
	I1213 20:02:12.094363   52214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:02:12.203277   52214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:02:12.218639   52214 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936 for IP: 192.168.39.50
	I1213 20:02:12.218656   52214 certs.go:194] generating shared ca certs ...
	I1213 20:02:12.218670   52214 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:02:12.218827   52214 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:02:12.218903   52214 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:02:12.218919   52214 certs.go:256] generating profile certs ...
	I1213 20:02:12.218995   52214 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/client.key
	I1213 20:02:12.219067   52214 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/apiserver.key.f5a83460
	I1213 20:02:12.219122   52214 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/proxy-client.key
	I1213 20:02:12.219240   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:02:12.219275   52214 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:02:12.219290   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:02:12.219320   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:02:12.219344   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:02:12.219388   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:02:12.219439   52214 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:02:12.220264   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:02:12.253871   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:02:12.285524   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:02:12.322364   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:02:12.356234   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 20:02:12.390312   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 20:02:12.423205   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:02:12.444378   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:02:12.465598   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:02:12.486683   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:02:12.507581   52214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:02:12.528541   52214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:02:12.543196   52214 ssh_runner.go:195] Run: openssl version
	I1213 20:02:12.548358   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:02:12.558361   52214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:02:12.562194   52214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:02:12.562249   52214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:02:12.567664   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
	I1213 20:02:12.577282   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:02:12.587190   52214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:02:12.591083   52214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:02:12.591131   52214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:02:12.596252   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:02:12.605795   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:02:12.615481   52214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:02:12.619346   52214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:02:12.619401   52214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:02:12.624373   52214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:02:12.633877   52214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:02:12.637635   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 20:02:12.642970   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 20:02:12.648131   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 20:02:12.653401   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 20:02:12.658630   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 20:02:12.663738   52214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
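
Each `openssl x509 -noout ... -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now. The same check expressed with the Go standard library, using one of the cert paths from the log:

// Sketch: report whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
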
	I1213 20:02:12.669018   52214 kubeadm.go:392] StartCluster: {Name:test-preload-089936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-089936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:02:12.669109   52214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:02:12.669180   52214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:02:12.704601   52214 cri.go:89] found id: ""
	I1213 20:02:12.704684   52214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:02:12.713896   52214 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 20:02:12.713915   52214 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 20:02:12.713962   52214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 20:02:12.722933   52214 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 20:02:12.723325   52214 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-089936" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:02:12.723437   52214 kubeconfig.go:62] /home/jenkins/minikube-integration/20090-12353/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-089936" cluster setting kubeconfig missing "test-preload-089936" context setting]
	I1213 20:02:12.723728   52214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:02:12.724288   52214 kapi.go:59] client config for test-preload-089936: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/client.crt", KeyFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/client.key", CAFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243da20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 20:02:12.724883   52214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 20:02:12.733838   52214 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.50
	I1213 20:02:12.733863   52214 kubeadm.go:1160] stopping kube-system containers ...
	I1213 20:02:12.733876   52214 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 20:02:12.733929   52214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:02:12.771766   52214 cri.go:89] found id: ""
	I1213 20:02:12.771845   52214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 20:02:12.787529   52214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:02:12.796268   52214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:02:12.796293   52214 kubeadm.go:157] found existing configuration files:
	
	I1213 20:02:12.796335   52214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:02:12.804654   52214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:02:12.804701   52214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:02:12.813133   52214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:02:12.821522   52214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:02:12.821568   52214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:02:12.830166   52214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:02:12.838322   52214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:02:12.838363   52214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:02:12.846561   52214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:02:12.854571   52214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:02:12.854613   52214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
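
The repeated grep/rm pairs above follow one rule: keep an existing kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443; otherwise remove it so the kubeadm init phases below can regenerate it. Sketched locally in Go, for illustration only:

// Sketch: delete kubeconfigs that are missing or do not mention the expected endpoint.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", path, rmErr)
			}
		}
	}
}
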
	I1213 20:02:12.863045   52214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:02:12.871970   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:12.964755   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:13.595615   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:13.846664   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:13.916973   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:13.998153   52214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:02:13.998224   52214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:02:14.498519   52214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:02:14.998528   52214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:02:15.020548   52214 api_server.go:72] duration metric: took 1.02239421s to wait for apiserver process to appear ...
	I1213 20:02:15.020578   52214 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:02:15.020600   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:15.021105   52214 api_server.go:269] stopped: https://192.168.39.50:8443/healthz: Get "https://192.168.39.50:8443/healthz": dial tcp 192.168.39.50:8443: connect: connection refused
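
From here the log polls https://192.168.39.50:8443/healthz until the apiserver reports healthy, treating connection refused, the 403 anonymous-user replies, and the 500 "post-start hook failed" replies below as "not ready yet". A minimal polling sketch under those assumptions (TLS verification is skipped purely to keep the example short):

// Sketch: wait for the apiserver healthz endpoint to return 200, retrying on errors.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.50:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet:", resp.Status) // e.g. 403/500 while hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
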
	I1213 20:02:15.520788   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:18.979922   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:02:18.979946   52214 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:02:18.979959   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:19.004979   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:02:19.005010   52214 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:02:19.021130   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:19.030162   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:02:19.030201   52214 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:02:19.521530   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:19.526578   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:02:19.526606   52214 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:02:20.020753   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:20.027455   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:02:20.027486   52214 api_server.go:103] status: https://192.168.39.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:02:20.521083   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:20.526030   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1213 20:02:20.532164   52214 api_server.go:141] control plane version: v1.24.4
	I1213 20:02:20.532197   52214 api_server.go:131] duration metric: took 5.511610469s to wait for apiserver health ...
	I1213 20:02:20.532208   52214 cni.go:84] Creating CNI manager for ""
	I1213 20:02:20.532216   52214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:02:20.533941   52214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:02:20.535151   52214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:02:20.544676   52214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:02:20.590926   52214 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:02:20.591033   52214 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 20:02:20.591055   52214 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 20:02:20.603728   52214 system_pods.go:59] 7 kube-system pods found
	I1213 20:02:20.603761   52214 system_pods.go:61] "coredns-6d4b75cb6d-rt9fl" [10bd6a60-c4f2-493d-9721-409de7faf4a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:02:20.603768   52214 system_pods.go:61] "etcd-test-preload-089936" [726d0b09-37a5-4d01-a697-b6bc568ab272] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:02:20.603774   52214 system_pods.go:61] "kube-apiserver-test-preload-089936" [b7055c44-d96d-4197-9fc7-a31ebed1f791] Running
	I1213 20:02:20.603780   52214 system_pods.go:61] "kube-controller-manager-test-preload-089936" [4cda56e0-5270-4aff-9451-5277cf216113] Running
	I1213 20:02:20.603785   52214 system_pods.go:61] "kube-proxy-xbd8x" [97bfacb2-c90f-468b-b3ae-8ea4248ac233] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 20:02:20.603788   52214 system_pods.go:61] "kube-scheduler-test-preload-089936" [d3609fbb-7991-4c05-a411-67f04f5de32b] Running
	I1213 20:02:20.603793   52214 system_pods.go:61] "storage-provisioner" [cb9567fd-8276-49f0-b81c-a577b2193c5c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 20:02:20.603799   52214 system_pods.go:74] duration metric: took 12.85437ms to wait for pod list to return data ...
	I1213 20:02:20.603809   52214 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:02:20.606719   52214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:02:20.606740   52214 node_conditions.go:123] node cpu capacity is 2
	I1213 20:02:20.606748   52214 node_conditions.go:105] duration metric: took 2.935889ms to run NodePressure ...
	I1213 20:02:20.606764   52214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:02:20.843073   52214 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1213 20:02:20.848074   52214 kubeadm.go:739] kubelet initialised
	I1213 20:02:20.848095   52214 kubeadm.go:740] duration metric: took 4.997338ms waiting for restarted kubelet to initialise ...
	I1213 20:02:20.848102   52214 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:02:20.853468   52214 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:20.866894   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.866919   52214 pod_ready.go:82] duration metric: took 13.426881ms for pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:20.866931   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.866939   52214 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:20.874234   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "etcd-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.874264   52214 pod_ready.go:82] duration metric: took 7.311542ms for pod "etcd-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:20.874275   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "etcd-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.874283   52214 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:20.880420   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "kube-apiserver-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.880455   52214 pod_ready.go:82] duration metric: took 6.147646ms for pod "kube-apiserver-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:20.880467   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "kube-apiserver-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.880477   52214 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:20.996696   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.996734   52214 pod_ready.go:82] duration metric: took 116.244079ms for pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:20.996748   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:20.996756   52214 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xbd8x" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:21.395134   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "kube-proxy-xbd8x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:21.395162   52214 pod_ready.go:82] duration metric: took 398.393991ms for pod "kube-proxy-xbd8x" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:21.395174   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "kube-proxy-xbd8x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:21.395182   52214 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:21.793862   52214 pod_ready.go:98] node "test-preload-089936" hosting pod "kube-scheduler-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:21.793886   52214 pod_ready.go:82] duration metric: took 398.697766ms for pod "kube-scheduler-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	E1213 20:02:21.793895   52214 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-089936" hosting pod "kube-scheduler-test-preload-089936" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:21.793902   52214 pod_ready.go:39] duration metric: took 945.786429ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:02:21.793922   52214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:02:21.805467   52214 ops.go:34] apiserver oom_adj: -16
	I1213 20:02:21.805490   52214 kubeadm.go:597] duration metric: took 9.091566561s to restartPrimaryControlPlane
	I1213 20:02:21.805500   52214 kubeadm.go:394] duration metric: took 9.136486604s to StartCluster
	I1213 20:02:21.805517   52214 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:02:21.805597   52214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:02:21.806192   52214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:02:21.806441   52214 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:02:21.806505   52214 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:02:21.806594   52214 addons.go:69] Setting storage-provisioner=true in profile "test-preload-089936"
	I1213 20:02:21.806609   52214 addons.go:234] Setting addon storage-provisioner=true in "test-preload-089936"
	W1213 20:02:21.806615   52214 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:02:21.806644   52214 host.go:66] Checking if "test-preload-089936" exists ...
	I1213 20:02:21.806642   52214 config.go:182] Loaded profile config "test-preload-089936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1213 20:02:21.806640   52214 addons.go:69] Setting default-storageclass=true in profile "test-preload-089936"
	I1213 20:02:21.806668   52214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-089936"
	I1213 20:02:21.807019   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:02:21.807069   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:02:21.807116   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:02:21.807158   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:02:21.808024   52214 out.go:177] * Verifying Kubernetes components...
	I1213 20:02:21.809370   52214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:02:21.821552   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I1213 20:02:21.821597   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I1213 20:02:21.822012   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:02:21.822055   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:02:21.822535   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:02:21.822538   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:02:21.822567   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:02:21.822550   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:02:21.822921   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:02:21.822919   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:02:21.823106   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetState
	I1213 20:02:21.823412   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:02:21.823442   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:02:21.825570   52214 kapi.go:59] client config for test-preload-089936: &rest.Config{Host:"https://192.168.39.50:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/client.crt", KeyFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/test-preload-089936/client.key", CAFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243da20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 20:02:21.825885   52214 addons.go:234] Setting addon default-storageclass=true in "test-preload-089936"
	W1213 20:02:21.825906   52214 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:02:21.825934   52214 host.go:66] Checking if "test-preload-089936" exists ...
	I1213 20:02:21.826327   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:02:21.826372   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:02:21.838940   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I1213 20:02:21.839506   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:02:21.840021   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:02:21.840054   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:02:21.840420   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:02:21.840606   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetState
	I1213 20:02:21.840725   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33549
	I1213 20:02:21.841141   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:02:21.841656   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:02:21.841678   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:02:21.842017   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:02:21.842312   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:02:21.842591   52214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:02:21.842638   52214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:02:21.844185   52214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:02:21.845610   52214 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:02:21.845624   52214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:02:21.845637   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:02:21.848909   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:02:21.849395   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:02:21.849419   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:02:21.849529   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:02:21.849683   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:02:21.849840   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:02:21.849953   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:02:21.869448   52214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I1213 20:02:21.869871   52214 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:02:21.870365   52214 main.go:141] libmachine: Using API Version  1
	I1213 20:02:21.870393   52214 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:02:21.870708   52214 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:02:21.870890   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetState
	I1213 20:02:21.872394   52214 main.go:141] libmachine: (test-preload-089936) Calling .DriverName
	I1213 20:02:21.872587   52214 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:02:21.872599   52214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:02:21.872613   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHHostname
	I1213 20:02:21.875232   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:02:21.875614   52214 main.go:141] libmachine: (test-preload-089936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:a8:7c", ip: ""} in network mk-test-preload-089936: {Iface:virbr1 ExpiryTime:2024-12-13 21:01:50 +0000 UTC Type:0 Mac:52:54:00:aa:a8:7c Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:test-preload-089936 Clientid:01:52:54:00:aa:a8:7c}
	I1213 20:02:21.875632   52214 main.go:141] libmachine: (test-preload-089936) DBG | domain test-preload-089936 has defined IP address 192.168.39.50 and MAC address 52:54:00:aa:a8:7c in network mk-test-preload-089936
	I1213 20:02:21.875765   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHPort
	I1213 20:02:21.875929   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHKeyPath
	I1213 20:02:21.876061   52214 main.go:141] libmachine: (test-preload-089936) Calling .GetSSHUsername
	I1213 20:02:21.876194   52214 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/test-preload-089936/id_rsa Username:docker}
	I1213 20:02:21.970510   52214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:02:21.985266   52214 node_ready.go:35] waiting up to 6m0s for node "test-preload-089936" to be "Ready" ...
	I1213 20:02:22.065644   52214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:02:22.087279   52214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:02:23.048694   52214 main.go:141] libmachine: Making call to close driver server
	I1213 20:02:23.048725   52214 main.go:141] libmachine: (test-preload-089936) Calling .Close
	I1213 20:02:23.048830   52214 main.go:141] libmachine: Making call to close driver server
	I1213 20:02:23.048852   52214 main.go:141] libmachine: (test-preload-089936) Calling .Close
	I1213 20:02:23.049015   52214 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:02:23.049033   52214 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:02:23.049048   52214 main.go:141] libmachine: Making call to close driver server
	I1213 20:02:23.049066   52214 main.go:141] libmachine: (test-preload-089936) Calling .Close
	I1213 20:02:23.049076   52214 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:02:23.049086   52214 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:02:23.049099   52214 main.go:141] libmachine: Making call to close driver server
	I1213 20:02:23.049109   52214 main.go:141] libmachine: (test-preload-089936) Calling .Close
	I1213 20:02:23.049331   52214 main.go:141] libmachine: (test-preload-089936) DBG | Closing plugin on server side
	I1213 20:02:23.049345   52214 main.go:141] libmachine: (test-preload-089936) DBG | Closing plugin on server side
	I1213 20:02:23.049364   52214 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:02:23.049366   52214 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:02:23.049373   52214 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:02:23.049381   52214 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:02:23.055708   52214 main.go:141] libmachine: Making call to close driver server
	I1213 20:02:23.055730   52214 main.go:141] libmachine: (test-preload-089936) Calling .Close
	I1213 20:02:23.055933   52214 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:02:23.055949   52214 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:02:23.055967   52214 main.go:141] libmachine: (test-preload-089936) DBG | Closing plugin on server side
	I1213 20:02:23.057860   52214 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1213 20:02:23.059174   52214 addons.go:510] duration metric: took 1.252676504s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 20:02:23.989372   52214 node_ready.go:53] node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:25.989425   52214 node_ready.go:53] node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:28.489804   52214 node_ready.go:53] node "test-preload-089936" has status "Ready":"False"
	I1213 20:02:29.489502   52214 node_ready.go:49] node "test-preload-089936" has status "Ready":"True"
	I1213 20:02:29.489529   52214 node_ready.go:38] duration metric: took 7.504234879s for node "test-preload-089936" to be "Ready" ...
	I1213 20:02:29.489542   52214 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:02:29.494687   52214 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:29.499016   52214 pod_ready.go:93] pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:29.499036   52214 pod_ready.go:82] duration metric: took 4.320039ms for pod "coredns-6d4b75cb6d-rt9fl" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:29.499047   52214 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.505686   52214 pod_ready.go:93] pod "etcd-test-preload-089936" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:30.505710   52214 pod_ready.go:82] duration metric: took 1.00665357s for pod "etcd-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.505723   52214 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.510273   52214 pod_ready.go:93] pod "kube-apiserver-test-preload-089936" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:30.510296   52214 pod_ready.go:82] duration metric: took 4.564892ms for pod "kube-apiserver-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.510308   52214 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.514570   52214 pod_ready.go:93] pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:30.514594   52214 pod_ready.go:82] duration metric: took 4.278572ms for pod "kube-controller-manager-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.514604   52214 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xbd8x" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.689824   52214 pod_ready.go:93] pod "kube-proxy-xbd8x" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:30.689848   52214 pod_ready.go:82] duration metric: took 175.235489ms for pod "kube-proxy-xbd8x" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:30.689860   52214 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:31.089259   52214 pod_ready.go:93] pod "kube-scheduler-test-preload-089936" in "kube-system" namespace has status "Ready":"True"
	I1213 20:02:31.089284   52214 pod_ready.go:82] duration metric: took 399.415473ms for pod "kube-scheduler-test-preload-089936" in "kube-system" namespace to be "Ready" ...
	I1213 20:02:31.089297   52214 pod_ready.go:39] duration metric: took 1.599743577s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:02:31.089316   52214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:02:31.089378   52214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:02:31.103334   52214 api_server.go:72] duration metric: took 9.296854609s to wait for apiserver process to appear ...
	I1213 20:02:31.103360   52214 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:02:31.103377   52214 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1213 20:02:31.109583   52214 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1213 20:02:31.110535   52214 api_server.go:141] control plane version: v1.24.4
	I1213 20:02:31.110559   52214 api_server.go:131] duration metric: took 7.191108ms to wait for apiserver health ...
	I1213 20:02:31.110581   52214 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:02:31.293773   52214 system_pods.go:59] 7 kube-system pods found
	I1213 20:02:31.293802   52214 system_pods.go:61] "coredns-6d4b75cb6d-rt9fl" [10bd6a60-c4f2-493d-9721-409de7faf4a5] Running
	I1213 20:02:31.293807   52214 system_pods.go:61] "etcd-test-preload-089936" [726d0b09-37a5-4d01-a697-b6bc568ab272] Running
	I1213 20:02:31.293811   52214 system_pods.go:61] "kube-apiserver-test-preload-089936" [b7055c44-d96d-4197-9fc7-a31ebed1f791] Running
	I1213 20:02:31.293815   52214 system_pods.go:61] "kube-controller-manager-test-preload-089936" [4cda56e0-5270-4aff-9451-5277cf216113] Running
	I1213 20:02:31.293818   52214 system_pods.go:61] "kube-proxy-xbd8x" [97bfacb2-c90f-468b-b3ae-8ea4248ac233] Running
	I1213 20:02:31.293821   52214 system_pods.go:61] "kube-scheduler-test-preload-089936" [d3609fbb-7991-4c05-a411-67f04f5de32b] Running
	I1213 20:02:31.293826   52214 system_pods.go:61] "storage-provisioner" [cb9567fd-8276-49f0-b81c-a577b2193c5c] Running
	I1213 20:02:31.293832   52214 system_pods.go:74] duration metric: took 183.244583ms to wait for pod list to return data ...
	I1213 20:02:31.293844   52214 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:02:31.490242   52214 default_sa.go:45] found service account: "default"
	I1213 20:02:31.490266   52214 default_sa.go:55] duration metric: took 196.416093ms for default service account to be created ...
	I1213 20:02:31.490277   52214 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:02:31.691132   52214 system_pods.go:86] 7 kube-system pods found
	I1213 20:02:31.691161   52214 system_pods.go:89] "coredns-6d4b75cb6d-rt9fl" [10bd6a60-c4f2-493d-9721-409de7faf4a5] Running
	I1213 20:02:31.691168   52214 system_pods.go:89] "etcd-test-preload-089936" [726d0b09-37a5-4d01-a697-b6bc568ab272] Running
	I1213 20:02:31.691174   52214 system_pods.go:89] "kube-apiserver-test-preload-089936" [b7055c44-d96d-4197-9fc7-a31ebed1f791] Running
	I1213 20:02:31.691180   52214 system_pods.go:89] "kube-controller-manager-test-preload-089936" [4cda56e0-5270-4aff-9451-5277cf216113] Running
	I1213 20:02:31.691185   52214 system_pods.go:89] "kube-proxy-xbd8x" [97bfacb2-c90f-468b-b3ae-8ea4248ac233] Running
	I1213 20:02:31.691190   52214 system_pods.go:89] "kube-scheduler-test-preload-089936" [d3609fbb-7991-4c05-a411-67f04f5de32b] Running
	I1213 20:02:31.691194   52214 system_pods.go:89] "storage-provisioner" [cb9567fd-8276-49f0-b81c-a577b2193c5c] Running
	I1213 20:02:31.691203   52214 system_pods.go:126] duration metric: took 200.919255ms to wait for k8s-apps to be running ...
	I1213 20:02:31.691218   52214 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:02:31.691267   52214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:02:31.705738   52214 system_svc.go:56] duration metric: took 14.518743ms WaitForService to wait for kubelet
	I1213 20:02:31.705767   52214 kubeadm.go:582] duration metric: took 9.899292823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:02:31.705784   52214 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:02:31.889514   52214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:02:31.889544   52214 node_conditions.go:123] node cpu capacity is 2
	I1213 20:02:31.889555   52214 node_conditions.go:105] duration metric: took 183.76637ms to run NodePressure ...
	I1213 20:02:31.889565   52214 start.go:241] waiting for startup goroutines ...
	I1213 20:02:31.889572   52214 start.go:246] waiting for cluster config update ...
	I1213 20:02:31.889581   52214 start.go:255] writing updated cluster config ...
	I1213 20:02:31.889849   52214 ssh_runner.go:195] Run: rm -f paused
	I1213 20:02:31.937565   52214 start.go:600] kubectl: 1.32.0, cluster: 1.24.4 (minor skew: 8)
	I1213 20:02:31.939542   52214 out.go:201] 
	W1213 20:02:31.940961   52214 out.go:270] ! /usr/local/bin/kubectl is version 1.32.0, which may have incompatibilities with Kubernetes 1.24.4.
	I1213 20:02:31.942155   52214 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1213 20:02:31.943289   52214 out.go:177] * Done! kubectl is now configured to use "test-preload-089936" cluster and "default" namespace by default
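	(Editor's aside, not part of the captured log: the bulk of the trace above is minikube's api_server.go polling the apiserver's /healthz endpoint and retrying about every 500ms until it returns 200. The sketch below shows roughly what such a polling loop looks like in Go; the host/port, the skipped TLS verification, and the retry interval are illustrative assumptions, not minikube's actual client setup, which authenticates with the cluster's client certificates as shown in the kapi.go line earlier.)

	// healthz_poll.go: minimal sketch of polling an apiserver healthz endpoint.
	// Assumptions: endpoint URL, 500ms retry interval, TLS verification skipped
	// because the test cluster uses self-signed certificates.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		url := "https://192.168.39.50:8443/healthz"

		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz request failed:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy; stop polling
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until the endpoint returns 200
		}
	}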
	
	
	==> CRI-O <==
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.825053904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120152825028146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f58361e5-cac6-4214-8c59-90d8e8e029d4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.825636540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6016f60e-7670-4e05-836a-180c01f3388f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.825745122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6016f60e-7670-4e05-836a-180c01f3388f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.825916824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b37954e80917e7d1fddb9a53abb5ea0e91a156332aaff9b7106b4ab4ad8b900,PodSandboxId:bb527cabd4b0c4131ba9c572a913f3d961ab2375ae5a6c4182965cdc66db97ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734120148174365460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rt9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bd6a60-c4f2-493d-9721-409de7faf4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 759c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42ed19b055a863e7a3b46084bf9555f26e3440d9af9eced96b68e7a5afe06e00,PodSandboxId:7dbd710c72311f942fd8c6c201046053548636614c6cf2889f9282891d1b2eed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734120140938992303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 97bfacb2-c90f-468b-b3ae-8ea4248ac233,},Annotations:map[string]string{io.kubernetes.container.hash: 6f7dd80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc66d5213bea961574b34d003ebfaef0d4dbeecb0636b0ca890eb0fa92b3b1b0,PodSandboxId:2a0516983bb7a301a69f8b02327349f8b9a6a30d1f36b2313f9f0f51f5510a55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120140675540938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb95
67fd-8276-49f0-b81c-a577b2193c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6daca0bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67cfadfd6ba465db1517c6b77894ef6bbdba735e1a78c7ee213b6c88915e1c7,PodSandboxId:3e2f4a912bb83880936139dc862a6a48fbfdea5faf0373d7383b0e68cbaf3e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734120134673040045,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 31e3e16fd068fdf111eab8f742a8104c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54f63c752efff60c569d90d1292e0e2a93dca9ff673a5a125eeff61de50036,PodSandboxId:b9dc7dd8e0b3d9b9a125171e45d3e20a459e1d87465f4619fba8368ef4eae773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734120134676639357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3805441ef6210c5e8a699d517
e3cae8f,},Annotations:map[string]string{io.kubernetes.container.hash: 71cdc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea211035a3eb1589571ca362879902b8c85befe4b752b35940437cca9a2c051,PodSandboxId:753690cdac9e5d6c78bb7b51bdbbf794917a3f41d2a3580a6bcbe19f93491289,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734120134635278778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0780125a7391f3b46384f7dc2340d9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 9df0cd37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9513a9fa4e8b03a293985af357c61c6f56934b7387b94a8a17541e289a00bb8f,PodSandboxId:408af017efb16ce723f508f2834bc3406b739ec7ac634d8887e5ddfbcf4c9f6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734120134648464589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3567af957cae3676687e944fdbff0cc,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6016f60e-7670-4e05-836a-180c01f3388f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.860565139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa972437-d09f-4bcd-a491-a743bc752fc2 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.860644015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa972437-d09f-4bcd-a491-a743bc752fc2 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.861983541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7200c57b-c999-416c-b24a-9307cf8574ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.862440633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120152862416347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7200c57b-c999-416c-b24a-9307cf8574ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.862926102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57cb3c86-665b-4b71-b158-85c31b09cd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.862973653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57cb3c86-665b-4b71-b158-85c31b09cd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.863152367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b37954e80917e7d1fddb9a53abb5ea0e91a156332aaff9b7106b4ab4ad8b900,PodSandboxId:bb527cabd4b0c4131ba9c572a913f3d961ab2375ae5a6c4182965cdc66db97ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734120148174365460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rt9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bd6a60-c4f2-493d-9721-409de7faf4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 759c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42ed19b055a863e7a3b46084bf9555f26e3440d9af9eced96b68e7a5afe06e00,PodSandboxId:7dbd710c72311f942fd8c6c201046053548636614c6cf2889f9282891d1b2eed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734120140938992303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 97bfacb2-c90f-468b-b3ae-8ea4248ac233,},Annotations:map[string]string{io.kubernetes.container.hash: 6f7dd80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc66d5213bea961574b34d003ebfaef0d4dbeecb0636b0ca890eb0fa92b3b1b0,PodSandboxId:2a0516983bb7a301a69f8b02327349f8b9a6a30d1f36b2313f9f0f51f5510a55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120140675540938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb95
67fd-8276-49f0-b81c-a577b2193c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6daca0bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67cfadfd6ba465db1517c6b77894ef6bbdba735e1a78c7ee213b6c88915e1c7,PodSandboxId:3e2f4a912bb83880936139dc862a6a48fbfdea5faf0373d7383b0e68cbaf3e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734120134673040045,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 31e3e16fd068fdf111eab8f742a8104c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54f63c752efff60c569d90d1292e0e2a93dca9ff673a5a125eeff61de50036,PodSandboxId:b9dc7dd8e0b3d9b9a125171e45d3e20a459e1d87465f4619fba8368ef4eae773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734120134676639357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3805441ef6210c5e8a699d517
e3cae8f,},Annotations:map[string]string{io.kubernetes.container.hash: 71cdc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea211035a3eb1589571ca362879902b8c85befe4b752b35940437cca9a2c051,PodSandboxId:753690cdac9e5d6c78bb7b51bdbbf794917a3f41d2a3580a6bcbe19f93491289,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734120134635278778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0780125a7391f3b46384f7dc2340d9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 9df0cd37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9513a9fa4e8b03a293985af357c61c6f56934b7387b94a8a17541e289a00bb8f,PodSandboxId:408af017efb16ce723f508f2834bc3406b739ec7ac634d8887e5ddfbcf4c9f6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734120134648464589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3567af957cae3676687e944fdbff0cc,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57cb3c86-665b-4b71-b158-85c31b09cd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.896124818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c6e375b-fbbf-452b-90d3-db20505c4416 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.896202846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c6e375b-fbbf-452b-90d3-db20505c4416 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.897110267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15f19134-c183-4ba6-b2ce-a83aabc136ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.897770188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120152897745823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f19134-c183-4ba6-b2ce-a83aabc136ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.898279216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50637bdf-33ba-4fa6-b935-3bfc9102f404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.898339980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50637bdf-33ba-4fa6-b935-3bfc9102f404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.898529645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b37954e80917e7d1fddb9a53abb5ea0e91a156332aaff9b7106b4ab4ad8b900,PodSandboxId:bb527cabd4b0c4131ba9c572a913f3d961ab2375ae5a6c4182965cdc66db97ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734120148174365460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rt9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bd6a60-c4f2-493d-9721-409de7faf4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 759c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42ed19b055a863e7a3b46084bf9555f26e3440d9af9eced96b68e7a5afe06e00,PodSandboxId:7dbd710c72311f942fd8c6c201046053548636614c6cf2889f9282891d1b2eed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734120140938992303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 97bfacb2-c90f-468b-b3ae-8ea4248ac233,},Annotations:map[string]string{io.kubernetes.container.hash: 6f7dd80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc66d5213bea961574b34d003ebfaef0d4dbeecb0636b0ca890eb0fa92b3b1b0,PodSandboxId:2a0516983bb7a301a69f8b02327349f8b9a6a30d1f36b2313f9f0f51f5510a55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120140675540938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb95
67fd-8276-49f0-b81c-a577b2193c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6daca0bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67cfadfd6ba465db1517c6b77894ef6bbdba735e1a78c7ee213b6c88915e1c7,PodSandboxId:3e2f4a912bb83880936139dc862a6a48fbfdea5faf0373d7383b0e68cbaf3e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734120134673040045,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 31e3e16fd068fdf111eab8f742a8104c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54f63c752efff60c569d90d1292e0e2a93dca9ff673a5a125eeff61de50036,PodSandboxId:b9dc7dd8e0b3d9b9a125171e45d3e20a459e1d87465f4619fba8368ef4eae773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734120134676639357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3805441ef6210c5e8a699d517
e3cae8f,},Annotations:map[string]string{io.kubernetes.container.hash: 71cdc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea211035a3eb1589571ca362879902b8c85befe4b752b35940437cca9a2c051,PodSandboxId:753690cdac9e5d6c78bb7b51bdbbf794917a3f41d2a3580a6bcbe19f93491289,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734120134635278778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0780125a7391f3b46384f7dc2340d9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 9df0cd37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9513a9fa4e8b03a293985af357c61c6f56934b7387b94a8a17541e289a00bb8f,PodSandboxId:408af017efb16ce723f508f2834bc3406b739ec7ac634d8887e5ddfbcf4c9f6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734120134648464589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3567af957cae3676687e944fdbff0cc,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50637bdf-33ba-4fa6-b935-3bfc9102f404 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.932134125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3b9b874-7ac6-474c-bb2e-8308af069ad1 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.932243643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b9b874-7ac6-474c-bb2e-8308af069ad1 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.933874974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9677ed0d-0901-41cc-9ae1-4f081cef2412 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.934527534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120152934489581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9677ed0d-0901-41cc-9ae1-4f081cef2412 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.935251142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70e8491e-25b7-4c1f-94c5-44f536e6a071 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.935336942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70e8491e-25b7-4c1f-94c5-44f536e6a071 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:02:32 test-preload-089936 crio[677]: time="2024-12-13 20:02:32.935598356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b37954e80917e7d1fddb9a53abb5ea0e91a156332aaff9b7106b4ab4ad8b900,PodSandboxId:bb527cabd4b0c4131ba9c572a913f3d961ab2375ae5a6c4182965cdc66db97ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734120148174365460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rt9fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10bd6a60-c4f2-493d-9721-409de7faf4a5,},Annotations:map[string]string{io.kubernetes.container.hash: 759c33d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42ed19b055a863e7a3b46084bf9555f26e3440d9af9eced96b68e7a5afe06e00,PodSandboxId:7dbd710c72311f942fd8c6c201046053548636614c6cf2889f9282891d1b2eed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734120140938992303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbd8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 97bfacb2-c90f-468b-b3ae-8ea4248ac233,},Annotations:map[string]string{io.kubernetes.container.hash: 6f7dd80,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc66d5213bea961574b34d003ebfaef0d4dbeecb0636b0ca890eb0fa92b3b1b0,PodSandboxId:2a0516983bb7a301a69f8b02327349f8b9a6a30d1f36b2313f9f0f51f5510a55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120140675540938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb95
67fd-8276-49f0-b81c-a577b2193c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6daca0bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67cfadfd6ba465db1517c6b77894ef6bbdba735e1a78c7ee213b6c88915e1c7,PodSandboxId:3e2f4a912bb83880936139dc862a6a48fbfdea5faf0373d7383b0e68cbaf3e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734120134673040045,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 31e3e16fd068fdf111eab8f742a8104c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54f63c752efff60c569d90d1292e0e2a93dca9ff673a5a125eeff61de50036,PodSandboxId:b9dc7dd8e0b3d9b9a125171e45d3e20a459e1d87465f4619fba8368ef4eae773,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734120134676639357,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3805441ef6210c5e8a699d517
e3cae8f,},Annotations:map[string]string{io.kubernetes.container.hash: 71cdc3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea211035a3eb1589571ca362879902b8c85befe4b752b35940437cca9a2c051,PodSandboxId:753690cdac9e5d6c78bb7b51bdbbf794917a3f41d2a3580a6bcbe19f93491289,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734120134635278778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0780125a7391f3b46384f7dc2340d9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 9df0cd37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9513a9fa4e8b03a293985af357c61c6f56934b7387b94a8a17541e289a00bb8f,PodSandboxId:408af017efb16ce723f508f2834bc3406b739ec7ac634d8887e5ddfbcf4c9f6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734120134648464589,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-089936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3567af957cae3676687e944fdbff0cc,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70e8491e-25b7-4c1f-94c5-44f536e6a071 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b37954e80917       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   bb527cabd4b0c       coredns-6d4b75cb6d-rt9fl
	42ed19b055a86       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   7dbd710c72311       kube-proxy-xbd8x
	dc66d5213bea9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   2a0516983bb7a       storage-provisioner
	3b54f63c752ef       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   b9dc7dd8e0b3d       etcd-test-preload-089936
	b67cfadfd6ba4       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   3e2f4a912bb83       kube-controller-manager-test-preload-089936
	9513a9fa4e8b0       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   408af017efb16       kube-scheduler-test-preload-089936
	7ea211035a3eb       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   753690cdac9e5       kube-apiserver-test-preload-089936
	
	
	==> coredns [2b37954e80917e7d1fddb9a53abb5ea0e91a156332aaff9b7106b4ab4ad8b900] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48716 - 30561 "HINFO IN 1439101632822331553.7592077318830280470. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011507258s
	
	
	==> describe nodes <==
	Name:               test-preload-089936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-089936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=test-preload-089936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_58_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:58:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-089936
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 20:02:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 20:02:29 +0000   Fri, 13 Dec 2024 19:58:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 20:02:29 +0000   Fri, 13 Dec 2024 19:58:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 20:02:29 +0000   Fri, 13 Dec 2024 19:58:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 20:02:29 +0000   Fri, 13 Dec 2024 20:02:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    test-preload-089936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b21cc9540ba47a38c21e268d321f1a6
	  System UUID:                1b21cc95-40ba-47a3-8c21-e268d321f1a6
	  Boot ID:                    b08f9c59-d4c2-4d03-85b9-721ac63ee775
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rt9fl                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m24s
	  kube-system                 etcd-test-preload-089936                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m37s
	  kube-system                 kube-apiserver-test-preload-089936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-controller-manager-test-preload-089936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-xbd8x                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-test-preload-089936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11s                    kube-proxy       
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m45s (x4 over 3m45s)  kubelet          Node test-preload-089936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x4 over 3m45s)  kubelet          Node test-preload-089936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x3 over 3m45s)  kubelet          Node test-preload-089936 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m37s                  kubelet          Node test-preload-089936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m37s                  kubelet          Node test-preload-089936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m37s                  kubelet          Node test-preload-089936 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m27s                  kubelet          Node test-preload-089936 status is now: NodeReady
	  Normal  RegisteredNode           3m24s                  node-controller  Node test-preload-089936 event: Registered Node test-preload-089936 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)      kubelet          Node test-preload-089936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)      kubelet          Node test-preload-089936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)      kubelet          Node test-preload-089936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                     node-controller  Node test-preload-089936 event: Registered Node test-preload-089936 in Controller
	
	
	==> dmesg <==
	[Dec13 20:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052644] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037172] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.829808] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.875361] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.565606] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.130872] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.060304] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053374] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.151826] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.139921] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.255550] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[Dec13 20:02] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.059075] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.578254] systemd-fstab-generator[1128]: Ignoring "noauto" option for root device
	[  +6.840381] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.251559] systemd-fstab-generator[1780]: Ignoring "noauto" option for root device
	[  +6.117349] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [3b54f63c752efff60c569d90d1292e0e2a93dca9ff673a5a125eeff61de50036] <==
	{"level":"info","ts":"2024-12-13T20:02:15.166Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"eb1de673f525aa4c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-13T20:02:15.167Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-13T20:02:15.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c switched to configuration voters=(16941950758946187852)"}
	{"level":"info","ts":"2024-12-13T20:02:15.168Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","added-peer-id":"eb1de673f525aa4c","added-peer-peer-urls":["https://192.168.39.50:2380"]}
	{"level":"info","ts":"2024-12-13T20:02:15.168Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c4909210040256fc","local-member-id":"eb1de673f525aa4c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:02:15.168Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:02:15.174Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-13T20:02:15.176Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-12-13T20:02:15.176Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.50:2380"}
	{"level":"info","ts":"2024-12-13T20:02:15.176Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"eb1de673f525aa4c","initial-advertise-peer-urls":["https://192.168.39.50:2380"],"listen-peer-urls":["https://192.168.39.50:2380"],"advertise-client-urls":["https://192.168.39.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-13T20:02:15.177Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgPreVoteResp from eb1de673f525aa4c at term 2"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became candidate at term 3"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c received MsgVoteResp from eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-12-13T20:02:16.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eb1de673f525aa4c became leader at term 3"}
	{"level":"info","ts":"2024-12-13T20:02:16.636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eb1de673f525aa4c elected leader eb1de673f525aa4c at term 3"}
	{"level":"info","ts":"2024-12-13T20:02:16.636Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"eb1de673f525aa4c","local-member-attributes":"{Name:test-preload-089936 ClientURLs:[https://192.168.39.50:2379]}","request-path":"/0/members/eb1de673f525aa4c/attributes","cluster-id":"c4909210040256fc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-13T20:02:16.636Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-13T20:02:16.638Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-13T20:02:16.639Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-13T20:02:16.640Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.50:2379"}
	{"level":"info","ts":"2024-12-13T20:02:16.644Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-13T20:02:16.644Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:02:33 up 0 min,  0 users,  load average: 1.38, 0.36, 0.12
	Linux test-preload-089936 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7ea211035a3eb1589571ca362879902b8c85befe4b752b35940437cca9a2c051] <==
	I1213 20:02:18.913613       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1213 20:02:18.913643       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1213 20:02:18.899636       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1213 20:02:18.955425       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1213 20:02:18.965319       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1213 20:02:18.965347       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1213 20:02:19.010241       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1213 20:02:19.013610       1 cache.go:39] Caches are synced for autoregister controller
	E1213 20:02:19.019241       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1213 20:02:19.056340       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 20:02:19.065746       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1213 20:02:19.097573       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1213 20:02:19.100550       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 20:02:19.101504       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1213 20:02:19.100648       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 20:02:19.586380       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1213 20:02:19.915779       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 20:02:20.726292       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1213 20:02:20.737178       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1213 20:02:20.805200       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1213 20:02:20.822363       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 20:02:20.828462       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 20:02:21.186448       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1213 20:02:32.105403       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 20:02:32.310191       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b67cfadfd6ba465db1517c6b77894ef6bbdba735e1a78c7ee213b6c88915e1c7] <==
	I1213 20:02:32.118095       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1213 20:02:32.122783       1 shared_informer.go:262] Caches are synced for endpoint
	I1213 20:02:32.127780       1 shared_informer.go:262] Caches are synced for disruption
	I1213 20:02:32.127804       1 disruption.go:371] Sending events to api server.
	I1213 20:02:32.128906       1 shared_informer.go:262] Caches are synced for TTL
	I1213 20:02:32.147912       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1213 20:02:32.190789       1 shared_informer.go:262] Caches are synced for service account
	I1213 20:02:32.202128       1 shared_informer.go:262] Caches are synced for taint
	I1213 20:02:32.202294       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I1213 20:02:32.202559       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W1213 20:02:32.202558       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-089936. Assuming now as a timestamp.
	I1213 20:02:32.202984       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1213 20:02:32.203599       1 event.go:294] "Event occurred" object="test-preload-089936" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-089936 event: Registered Node test-preload-089936 in Controller"
	I1213 20:02:32.213829       1 shared_informer.go:262] Caches are synced for namespace
	I1213 20:02:32.232521       1 shared_informer.go:262] Caches are synced for attach detach
	I1213 20:02:32.235035       1 shared_informer.go:262] Caches are synced for stateful set
	I1213 20:02:32.251882       1 shared_informer.go:262] Caches are synced for persistent volume
	I1213 20:02:32.268644       1 shared_informer.go:262] Caches are synced for resource quota
	I1213 20:02:32.289643       1 shared_informer.go:262] Caches are synced for expand
	I1213 20:02:32.303757       1 shared_informer.go:262] Caches are synced for ephemeral
	I1213 20:02:32.306349       1 shared_informer.go:262] Caches are synced for PVC protection
	I1213 20:02:32.310347       1 shared_informer.go:262] Caches are synced for resource quota
	I1213 20:02:32.771322       1 shared_informer.go:262] Caches are synced for garbage collector
	I1213 20:02:32.803613       1 shared_informer.go:262] Caches are synced for garbage collector
	I1213 20:02:32.803649       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [42ed19b055a863e7a3b46084bf9555f26e3440d9af9eced96b68e7a5afe06e00] <==
	I1213 20:02:21.135673       1 node.go:163] Successfully retrieved node IP: 192.168.39.50
	I1213 20:02:21.135894       1 server_others.go:138] "Detected node IP" address="192.168.39.50"
	I1213 20:02:21.136014       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1213 20:02:21.174063       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1213 20:02:21.174131       1 server_others.go:206] "Using iptables Proxier"
	I1213 20:02:21.174817       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1213 20:02:21.175600       1 server.go:661] "Version info" version="v1.24.4"
	I1213 20:02:21.175770       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 20:02:21.177588       1 config.go:317] "Starting service config controller"
	I1213 20:02:21.177949       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1213 20:02:21.178005       1 config.go:226] "Starting endpoint slice config controller"
	I1213 20:02:21.178026       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1213 20:02:21.179227       1 config.go:444] "Starting node config controller"
	I1213 20:02:21.180531       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1213 20:02:21.279852       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1213 20:02:21.280634       1 shared_informer.go:262] Caches are synced for node config
	I1213 20:02:21.280153       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [9513a9fa4e8b03a293985af357c61c6f56934b7387b94a8a17541e289a00bb8f] <==
	I1213 20:02:16.175513       1 serving.go:348] Generated self-signed cert in-memory
	W1213 20:02:19.003291       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 20:02:19.003420       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 20:02:19.003449       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 20:02:19.003515       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 20:02:19.026639       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1213 20:02:19.026761       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 20:02:19.028598       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 20:02:19.034626       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 20:02:19.041674       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 20:02:19.038498       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 20:02:19.142987       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.081927    1135 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-089936"
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.085405    1135 setters.go:532] "Node became not ready" node="test-preload-089936" condition={Type:Ready Status:False LastHeartbeatTime:2024-12-13 20:02:19.085290973 +0000 UTC m=+5.245719655 LastTransitionTime:2024-12-13 20:02:19.085290973 +0000 UTC m=+5.245719655 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.942851    1135 apiserver.go:52] "Watching apiserver"
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.946040    1135 topology_manager.go:200] "Topology Admit Handler"
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.946186    1135 topology_manager.go:200] "Topology Admit Handler"
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: I1213 20:02:19.946230    1135 topology_manager.go:200] "Topology Admit Handler"
	Dec 13 20:02:19 test-preload-089936 kubelet[1135]: E1213 20:02:19.947235    1135 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rt9fl" podUID=10bd6a60-c4f2-493d-9721-409de7faf4a5
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.009513    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cb9567fd-8276-49f0-b81c-a577b2193c5c-tmp\") pod \"storage-provisioner\" (UID: \"cb9567fd-8276-49f0-b81c-a577b2193c5c\") " pod="kube-system/storage-provisioner"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.009793    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvgj\" (UniqueName: \"kubernetes.io/projected/cb9567fd-8276-49f0-b81c-a577b2193c5c-kube-api-access-8zvgj\") pod \"storage-provisioner\" (UID: \"cb9567fd-8276-49f0-b81c-a577b2193c5c\") " pod="kube-system/storage-provisioner"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.009864    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97bfacb2-c90f-468b-b3ae-8ea4248ac233-kube-proxy\") pod \"kube-proxy-xbd8x\" (UID: \"97bfacb2-c90f-468b-b3ae-8ea4248ac233\") " pod="kube-system/kube-proxy-xbd8x"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.009920    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume\") pod \"coredns-6d4b75cb6d-rt9fl\" (UID: \"10bd6a60-c4f2-493d-9721-409de7faf4a5\") " pod="kube-system/coredns-6d4b75cb6d-rt9fl"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.009980    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrmm\" (UniqueName: \"kubernetes.io/projected/97bfacb2-c90f-468b-b3ae-8ea4248ac233-kube-api-access-jtrmm\") pod \"kube-proxy-xbd8x\" (UID: \"97bfacb2-c90f-468b-b3ae-8ea4248ac233\") " pod="kube-system/kube-proxy-xbd8x"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.010032    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447tc\" (UniqueName: \"kubernetes.io/projected/10bd6a60-c4f2-493d-9721-409de7faf4a5-kube-api-access-447tc\") pod \"coredns-6d4b75cb6d-rt9fl\" (UID: \"10bd6a60-c4f2-493d-9721-409de7faf4a5\") " pod="kube-system/coredns-6d4b75cb6d-rt9fl"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.010078    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97bfacb2-c90f-468b-b3ae-8ea4248ac233-xtables-lock\") pod \"kube-proxy-xbd8x\" (UID: \"97bfacb2-c90f-468b-b3ae-8ea4248ac233\") " pod="kube-system/kube-proxy-xbd8x"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.010121    1135 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97bfacb2-c90f-468b-b3ae-8ea4248ac233-lib-modules\") pod \"kube-proxy-xbd8x\" (UID: \"97bfacb2-c90f-468b-b3ae-8ea4248ac233\") " pod="kube-system/kube-proxy-xbd8x"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: I1213 20:02:20.010167    1135 reconciler.go:159] "Reconciler: start to sync state"
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: E1213 20:02:20.111994    1135 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: E1213 20:02:20.112514    1135 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume podName:10bd6a60-c4f2-493d-9721-409de7faf4a5 nodeName:}" failed. No retries permitted until 2024-12-13 20:02:20.612476765 +0000 UTC m=+6.772905450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume") pod "coredns-6d4b75cb6d-rt9fl" (UID: "10bd6a60-c4f2-493d-9721-409de7faf4a5") : object "kube-system"/"coredns" not registered
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: E1213 20:02:20.616311    1135 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 20:02:20 test-preload-089936 kubelet[1135]: E1213 20:02:20.616397    1135 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume podName:10bd6a60-c4f2-493d-9721-409de7faf4a5 nodeName:}" failed. No retries permitted until 2024-12-13 20:02:21.616381665 +0000 UTC m=+7.776810336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume") pod "coredns-6d4b75cb6d-rt9fl" (UID: "10bd6a60-c4f2-493d-9721-409de7faf4a5") : object "kube-system"/"coredns" not registered
	Dec 13 20:02:21 test-preload-089936 kubelet[1135]: E1213 20:02:21.625790    1135 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 20:02:21 test-preload-089936 kubelet[1135]: E1213 20:02:21.625924    1135 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume podName:10bd6a60-c4f2-493d-9721-409de7faf4a5 nodeName:}" failed. No retries permitted until 2024-12-13 20:02:23.625906889 +0000 UTC m=+9.786335570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume") pod "coredns-6d4b75cb6d-rt9fl" (UID: "10bd6a60-c4f2-493d-9721-409de7faf4a5") : object "kube-system"/"coredns" not registered
	Dec 13 20:02:22 test-preload-089936 kubelet[1135]: E1213 20:02:22.045941    1135 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rt9fl" podUID=10bd6a60-c4f2-493d-9721-409de7faf4a5
	Dec 13 20:02:23 test-preload-089936 kubelet[1135]: E1213 20:02:23.640512    1135 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 20:02:23 test-preload-089936 kubelet[1135]: E1213 20:02:23.640622    1135 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume podName:10bd6a60-c4f2-493d-9721-409de7faf4a5 nodeName:}" failed. No retries permitted until 2024-12-13 20:02:27.640603488 +0000 UTC m=+13.801032157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/10bd6a60-c4f2-493d-9721-409de7faf4a5-config-volume") pod "coredns-6d4b75cb6d-rt9fl" (UID: "10bd6a60-c4f2-493d-9721-409de7faf4a5") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [dc66d5213bea961574b34d003ebfaef0d4dbeecb0636b0ca890eb0fa92b3b1b0] <==
	I1213 20:02:20.782176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-089936 -n test-preload-089936
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-089936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-089936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-089936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-089936: (1.141769904s)
--- FAIL: TestPreload (288.57s)

                                                
                                    
TestKubernetesUpgrade (375.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1213 20:05:41.726969   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m53.904871067s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-980370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-980370" primary control-plane node in "kubernetes-upgrade-980370" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 20:05:35.215795   56906 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:05:35.215932   56906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:05:35.215944   56906 out.go:358] Setting ErrFile to fd 2...
	I1213 20:05:35.215951   56906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:05:35.216234   56906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:05:35.216980   56906 out.go:352] Setting JSON to false
	I1213 20:05:35.218291   56906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6478,"bootTime":1734113857,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:05:35.218419   56906 start.go:139] virtualization: kvm guest
	I1213 20:05:35.220803   56906 out.go:177] * [kubernetes-upgrade-980370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:05:35.222057   56906 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:05:35.222103   56906 notify.go:220] Checking for updates...
	I1213 20:05:35.224487   56906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:05:35.225792   56906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:05:35.227232   56906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:05:35.228459   56906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:05:35.229464   56906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:05:35.230904   56906 config.go:182] Loaded profile config "NoKubernetes-397374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:05:35.231019   56906 config.go:182] Loaded profile config "offline-crio-372192": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:05:35.231096   56906 config.go:182] Loaded profile config "running-upgrade-176442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1213 20:05:35.231170   56906 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:05:35.267830   56906 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 20:05:35.269209   56906 start.go:297] selected driver: kvm2
	I1213 20:05:35.269230   56906 start.go:901] validating driver "kvm2" against <nil>
	I1213 20:05:35.269255   56906 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:05:35.270047   56906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:05:35.270155   56906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:05:35.284777   56906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:05:35.284824   56906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 20:05:35.285074   56906 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 20:05:35.285099   56906 cni.go:84] Creating CNI manager for ""
	I1213 20:05:35.285149   56906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:05:35.285160   56906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 20:05:35.285208   56906 start.go:340] cluster config:
	{Name:kubernetes-upgrade-980370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:05:35.285332   56906 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:05:35.287097   56906 out.go:177] * Starting "kubernetes-upgrade-980370" primary control-plane node in "kubernetes-upgrade-980370" cluster
	I1213 20:05:35.288369   56906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:05:35.288438   56906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 20:05:35.288456   56906 cache.go:56] Caching tarball of preloaded images
	I1213 20:05:35.288566   56906 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:05:35.288581   56906 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1213 20:05:35.288704   56906 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/config.json ...
	I1213 20:05:35.288732   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/config.json: {Name:mk2accb4419ecee4cab16056e1f88781017fc245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:05:35.288900   56906 start.go:360] acquireMachinesLock for kubernetes-upgrade-980370: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:06:00.387326   56906 start.go:364] duration metric: took 25.098378509s to acquireMachinesLock for "kubernetes-upgrade-980370"
	I1213 20:06:00.387425   56906 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-980370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:06:00.387530   56906 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 20:06:00.389660   56906 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1213 20:06:00.389862   56906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:06:00.389920   56906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:06:00.409937   56906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I1213 20:06:00.410492   56906 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:06:00.411092   56906 main.go:141] libmachine: Using API Version  1
	I1213 20:06:00.411113   56906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:06:00.411451   56906 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:06:00.411648   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetMachineName
	I1213 20:06:00.411788   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:00.411929   56906 start.go:159] libmachine.API.Create for "kubernetes-upgrade-980370" (driver="kvm2")
	I1213 20:06:00.411958   56906 client.go:168] LocalClient.Create starting
	I1213 20:06:00.412000   56906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem
	I1213 20:06:00.412045   56906 main.go:141] libmachine: Decoding PEM data...
	I1213 20:06:00.412071   56906 main.go:141] libmachine: Parsing certificate...
	I1213 20:06:00.412143   56906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem
	I1213 20:06:00.412171   56906 main.go:141] libmachine: Decoding PEM data...
	I1213 20:06:00.412191   56906 main.go:141] libmachine: Parsing certificate...
	I1213 20:06:00.412216   56906 main.go:141] libmachine: Running pre-create checks...
	I1213 20:06:00.412228   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .PreCreateCheck
	I1213 20:06:00.412551   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetConfigRaw
	I1213 20:06:00.412964   56906 main.go:141] libmachine: Creating machine...
	I1213 20:06:00.412981   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Create
	I1213 20:06:00.413119   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) creating KVM machine...
	I1213 20:06:00.413139   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) creating network...
	I1213 20:06:00.414223   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found existing default KVM network
	I1213 20:06:00.415718   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:00.415558   57415 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002215e0}
	I1213 20:06:00.415744   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | created network xml: 
	I1213 20:06:00.415771   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | <network>
	I1213 20:06:00.415791   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   <name>mk-kubernetes-upgrade-980370</name>
	I1213 20:06:00.415802   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   <dns enable='no'/>
	I1213 20:06:00.415809   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   
	I1213 20:06:00.415825   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1213 20:06:00.415837   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |     <dhcp>
	I1213 20:06:00.415871   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1213 20:06:00.415917   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |     </dhcp>
	I1213 20:06:00.415931   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   </ip>
	I1213 20:06:00.415950   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG |   
	I1213 20:06:00.415962   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | </network>
	I1213 20:06:00.415972   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | 
	I1213 20:06:00.421172   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | trying to create private KVM network mk-kubernetes-upgrade-980370 192.168.39.0/24...
	I1213 20:06:00.493162   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | private KVM network mk-kubernetes-upgrade-980370 192.168.39.0/24 created
	I1213 20:06:00.493194   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:00.493130   57415 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:06:00.493208   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting up store path in /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370 ...
	I1213 20:06:00.493226   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) building disk image from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 20:06:00.493336   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Downloading /home/jenkins/minikube-integration/20090-12353/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 20:06:00.748885   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:00.748764   57415 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa...
	I1213 20:06:00.959368   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:00.959243   57415 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/kubernetes-upgrade-980370.rawdisk...
	I1213 20:06:00.959409   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Writing magic tar header
	I1213 20:06:00.959478   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Writing SSH key tar header
	I1213 20:06:00.959512   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:00.959409   57415 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370 ...
	I1213 20:06:00.959531   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370
	I1213 20:06:00.959572   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370 (perms=drwx------)
	I1213 20:06:00.959593   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines
	I1213 20:06:00.959607   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines (perms=drwxr-xr-x)
	I1213 20:06:00.959621   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:06:00.959642   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353
	I1213 20:06:00.959657   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1213 20:06:00.959667   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home/jenkins
	I1213 20:06:00.959678   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | checking permissions on dir: /home
	I1213 20:06:00.959688   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube (perms=drwxr-xr-x)
	I1213 20:06:00.959704   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins/minikube-integration/20090-12353 (perms=drwxrwxr-x)
	I1213 20:06:00.959719   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 20:06:00.959737   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 20:06:00.959750   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | skipping /home - not owner
	I1213 20:06:00.959758   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) creating domain...
	I1213 20:06:00.960870   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) define libvirt domain using xml: 
	I1213 20:06:00.960899   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) <domain type='kvm'>
	I1213 20:06:00.960911   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <name>kubernetes-upgrade-980370</name>
	I1213 20:06:00.960924   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <memory unit='MiB'>2200</memory>
	I1213 20:06:00.960934   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <vcpu>2</vcpu>
	I1213 20:06:00.960944   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <features>
	I1213 20:06:00.960967   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <acpi/>
	I1213 20:06:00.960977   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <apic/>
	I1213 20:06:00.960990   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <pae/>
	I1213 20:06:00.961008   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     
	I1213 20:06:00.961021   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   </features>
	I1213 20:06:00.961030   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <cpu mode='host-passthrough'>
	I1213 20:06:00.961037   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   
	I1213 20:06:00.961044   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   </cpu>
	I1213 20:06:00.961050   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <os>
	I1213 20:06:00.961058   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <type>hvm</type>
	I1213 20:06:00.961066   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <boot dev='cdrom'/>
	I1213 20:06:00.961086   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <boot dev='hd'/>
	I1213 20:06:00.961102   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <bootmenu enable='no'/>
	I1213 20:06:00.961110   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   </os>
	I1213 20:06:00.961118   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   <devices>
	I1213 20:06:00.961129   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <disk type='file' device='cdrom'>
	I1213 20:06:00.961143   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/boot2docker.iso'/>
	I1213 20:06:00.961152   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <target dev='hdc' bus='scsi'/>
	I1213 20:06:00.961163   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <readonly/>
	I1213 20:06:00.961171   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </disk>
	I1213 20:06:00.961186   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <disk type='file' device='disk'>
	I1213 20:06:00.961200   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1213 20:06:00.961225   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/kubernetes-upgrade-980370.rawdisk'/>
	I1213 20:06:00.961236   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <target dev='hda' bus='virtio'/>
	I1213 20:06:00.961246   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </disk>
	I1213 20:06:00.961273   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <interface type='network'>
	I1213 20:06:00.961295   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <source network='mk-kubernetes-upgrade-980370'/>
	I1213 20:06:00.961308   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <model type='virtio'/>
	I1213 20:06:00.961319   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </interface>
	I1213 20:06:00.961341   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <interface type='network'>
	I1213 20:06:00.961369   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <source network='default'/>
	I1213 20:06:00.961382   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <model type='virtio'/>
	I1213 20:06:00.961390   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </interface>
	I1213 20:06:00.961403   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <serial type='pty'>
	I1213 20:06:00.961410   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <target port='0'/>
	I1213 20:06:00.961415   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </serial>
	I1213 20:06:00.961425   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <console type='pty'>
	I1213 20:06:00.961435   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <target type='serial' port='0'/>
	I1213 20:06:00.961446   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </console>
	I1213 20:06:00.961462   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     <rng model='virtio'>
	I1213 20:06:00.961479   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)       <backend model='random'>/dev/random</backend>
	I1213 20:06:00.961491   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     </rng>
	I1213 20:06:00.961499   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     
	I1213 20:06:00.961506   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)     
	I1213 20:06:00.961516   56906 main.go:141] libmachine: (kubernetes-upgrade-980370)   </devices>
	I1213 20:06:00.961523   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) </domain>
	I1213 20:06:00.961532   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) 
	I1213 20:06:00.964885   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:f4:87:7d in network default
	I1213 20:06:00.965437   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) starting domain...
	I1213 20:06:00.965456   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:00.965462   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) ensuring networks are active...
	I1213 20:06:00.966127   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Ensuring network default is active
	I1213 20:06:00.966415   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Ensuring network mk-kubernetes-upgrade-980370 is active
	I1213 20:06:00.966910   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) getting domain XML...
	I1213 20:06:00.967621   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) creating domain...
	I1213 20:06:02.204716   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) waiting for IP...
	I1213 20:06:02.205710   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.206245   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.206302   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:02.206237   57415 retry.go:31] will retry after 246.890732ms: waiting for domain to come up
	I1213 20:06:02.455026   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.455566   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.455594   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:02.455517   57415 retry.go:31] will retry after 260.645508ms: waiting for domain to come up
	I1213 20:06:02.717887   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.718415   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:02.718439   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:02.718387   57415 retry.go:31] will retry after 440.895287ms: waiting for domain to come up
	I1213 20:06:03.161029   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:03.161513   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:03.161544   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:03.161457   57415 retry.go:31] will retry after 464.179191ms: waiting for domain to come up
	I1213 20:06:03.627094   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:03.627561   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:03.627592   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:03.627517   57415 retry.go:31] will retry after 724.976921ms: waiting for domain to come up
	I1213 20:06:04.354397   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:04.354828   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:04.354874   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:04.354803   57415 retry.go:31] will retry after 757.970659ms: waiting for domain to come up
	I1213 20:06:05.115083   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:05.115682   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:05.115714   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:05.115581   57415 retry.go:31] will retry after 782.445603ms: waiting for domain to come up
	I1213 20:06:05.900012   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:05.900591   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:05.900628   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:05.900493   57415 retry.go:31] will retry after 991.947364ms: waiting for domain to come up
	I1213 20:06:06.893814   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:06.894353   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:06.894380   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:06.894333   57415 retry.go:31] will retry after 1.619487966s: waiting for domain to come up
	I1213 20:06:08.516120   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:08.516585   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:08.516616   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:08.516547   57415 retry.go:31] will retry after 2.192074219s: waiting for domain to come up
	I1213 20:06:10.710237   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:10.710699   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:10.710736   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:10.710664   57415 retry.go:31] will retry after 2.351033468s: waiting for domain to come up
	I1213 20:06:13.064014   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:13.064473   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:13.064494   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:13.064431   57415 retry.go:31] will retry after 2.472801503s: waiting for domain to come up
	I1213 20:06:15.538521   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:15.539125   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:15.539161   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:15.539098   57415 retry.go:31] will retry after 3.680789217s: waiting for domain to come up
	I1213 20:06:19.223166   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:19.223792   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find current IP address of domain kubernetes-upgrade-980370 in network mk-kubernetes-upgrade-980370
	I1213 20:06:19.223820   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | I1213 20:06:19.223755   57415 retry.go:31] will retry after 3.577375766s: waiting for domain to come up
	I1213 20:06:22.803924   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:22.804544   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) found domain IP: 192.168.39.131
	I1213 20:06:22.804565   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) reserving static IP address...
	I1213 20:06:22.804603   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has current primary IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:22.804946   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-980370", mac: "52:54:00:b8:2c:2f", ip: "192.168.39.131"} in network mk-kubernetes-upgrade-980370
	I1213 20:06:22.884448   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) reserved static IP address 192.168.39.131 for domain kubernetes-upgrade-980370
	I1213 20:06:22.884477   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) waiting for SSH...
	I1213 20:06:22.884500   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Getting to WaitForSSH function...
	I1213 20:06:22.887916   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:22.888370   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:22.888407   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:22.888586   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Using SSH client type: external
	I1213 20:06:22.888621   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa (-rw-------)
	I1213 20:06:22.888665   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:06:22.888685   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | About to run SSH command:
	I1213 20:06:22.888727   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | exit 0
	I1213 20:06:23.023499   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | SSH cmd err, output: <nil>: 
	I1213 20:06:23.023789   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) KVM machine creation complete
	I1213 20:06:23.024215   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetConfigRaw
	I1213 20:06:23.024866   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:23.025080   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:23.025276   56906 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1213 20:06:23.025295   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetState
	I1213 20:06:23.026545   56906 main.go:141] libmachine: Detecting operating system of created instance...
	I1213 20:06:23.026576   56906 main.go:141] libmachine: Waiting for SSH to be available...
	I1213 20:06:23.026588   56906 main.go:141] libmachine: Getting to WaitForSSH function...
	I1213 20:06:23.026599   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.029372   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.029882   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.029918   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.030081   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.030271   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.030406   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.030575   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.030775   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:23.031033   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:23.031046   56906 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1213 20:06:23.141869   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:06:23.141895   56906 main.go:141] libmachine: Detecting the provisioner...
	I1213 20:06:23.141902   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.145239   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.145721   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.145754   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.145996   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.146193   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.146380   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.146564   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.146717   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:23.146925   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:23.146940   56906 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1213 20:06:23.263535   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1213 20:06:23.263614   56906 main.go:141] libmachine: found compatible host: buildroot
	I1213 20:06:23.263628   56906 main.go:141] libmachine: Provisioning with buildroot...
	I1213 20:06:23.263638   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetMachineName
	I1213 20:06:23.263879   56906 buildroot.go:166] provisioning hostname "kubernetes-upgrade-980370"
	I1213 20:06:23.263913   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetMachineName
	I1213 20:06:23.264083   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.266726   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.267095   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.267125   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.267242   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.267429   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.267597   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.267730   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.267896   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:23.268131   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:23.268150   56906 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-980370 && echo "kubernetes-upgrade-980370" | sudo tee /etc/hostname
	I1213 20:06:23.402133   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-980370
	
	I1213 20:06:23.402168   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.404797   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.405162   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.405190   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.405373   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.405523   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.405698   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.405829   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.405980   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:23.406138   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:23.406156   56906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-980370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-980370/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-980370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:06:23.527542   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:06:23.527575   56906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:06:23.527613   56906 buildroot.go:174] setting up certificates
	I1213 20:06:23.527628   56906 provision.go:84] configureAuth start
	I1213 20:06:23.527647   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetMachineName
	I1213 20:06:23.527918   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetIP
	I1213 20:06:23.531533   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.531876   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.531917   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.532067   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.534575   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.534940   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.534970   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.535117   56906 provision.go:143] copyHostCerts
	I1213 20:06:23.535207   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:06:23.535226   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:06:23.535294   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:06:23.535444   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:06:23.535454   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:06:23.535479   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:06:23.535563   56906 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:06:23.535576   56906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:06:23.535611   56906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:06:23.535719   56906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-980370 san=[127.0.0.1 192.168.39.131 kubernetes-upgrade-980370 localhost minikube]
	I1213 20:06:23.697512   56906 provision.go:177] copyRemoteCerts
	I1213 20:06:23.697570   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:06:23.697598   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.700448   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.700736   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.700760   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.700942   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.701108   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.701286   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.701428   56906 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:06:23.785516   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:06:23.809534   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 20:06:23.830761   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 20:06:23.852249   56906 provision.go:87] duration metric: took 324.605664ms to configureAuth
	I1213 20:06:23.852273   56906 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:06:23.852444   56906 config.go:182] Loaded profile config "kubernetes-upgrade-980370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1213 20:06:23.852525   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:23.855124   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.855475   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:23.855505   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:23.855710   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:23.855892   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.856044   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:23.856159   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:23.856307   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:23.856494   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:23.856525   56906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:06:24.099844   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:06:24.099874   56906 main.go:141] libmachine: Checking connection to Docker...
	I1213 20:06:24.099882   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetURL
	I1213 20:06:24.101185   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | using libvirt version 6000000
	I1213 20:06:24.103596   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.103923   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.103955   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.104250   56906 main.go:141] libmachine: Docker is up and running!
	I1213 20:06:24.104267   56906 main.go:141] libmachine: Reticulating splines...
	I1213 20:06:24.104276   56906 client.go:171] duration metric: took 23.692306188s to LocalClient.Create
	I1213 20:06:24.104305   56906 start.go:167] duration metric: took 23.692376595s to libmachine.API.Create "kubernetes-upgrade-980370"
	I1213 20:06:24.104317   56906 start.go:293] postStartSetup for "kubernetes-upgrade-980370" (driver="kvm2")
	I1213 20:06:24.104330   56906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:06:24.104366   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:24.104618   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:06:24.104644   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:24.106960   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.107315   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.107357   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.107435   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:24.107608   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:24.107764   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:24.107895   56906 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:06:24.193134   56906 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:06:24.197491   56906 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:06:24.197518   56906 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:06:24.197584   56906 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:06:24.197671   56906 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:06:24.197760   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:06:24.206522   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:06:24.228775   56906 start.go:296] duration metric: took 124.443504ms for postStartSetup
	I1213 20:06:24.228831   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetConfigRaw
	I1213 20:06:24.229566   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetIP
	I1213 20:06:24.232541   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.232839   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.232866   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.233061   56906 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/config.json ...
	I1213 20:06:24.233278   56906 start.go:128] duration metric: took 23.845735127s to createHost
	I1213 20:06:24.233312   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:24.235641   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.235943   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.235975   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.236120   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:24.236305   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:24.236467   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:24.236607   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:24.236797   56906 main.go:141] libmachine: Using SSH client type: native
	I1213 20:06:24.236945   56906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1213 20:06:24.236954   56906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:06:24.351066   56906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734120384.305057962
	
	I1213 20:06:24.351089   56906 fix.go:216] guest clock: 1734120384.305057962
	I1213 20:06:24.351095   56906 fix.go:229] Guest: 2024-12-13 20:06:24.305057962 +0000 UTC Remote: 2024-12-13 20:06:24.233291621 +0000 UTC m=+49.055261208 (delta=71.766341ms)
	I1213 20:06:24.351112   56906 fix.go:200] guest clock delta is within tolerance: 71.766341ms
	I1213 20:06:24.351117   56906 start.go:83] releasing machines lock for "kubernetes-upgrade-980370", held for 23.963749025s
	I1213 20:06:24.351136   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:24.351396   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetIP
	I1213 20:06:24.354096   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.354498   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.354525   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.354682   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:24.355129   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:24.355330   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:06:24.355413   56906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:06:24.355459   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:24.355541   56906 ssh_runner.go:195] Run: cat /version.json
	I1213 20:06:24.355564   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:06:24.358272   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.358376   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.358775   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.358814   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.358860   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:24.358875   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:24.358960   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:24.359074   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:06:24.359163   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:24.359232   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:06:24.359411   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:24.359479   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:06:24.359556   56906 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:06:24.359626   56906 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:06:24.456946   56906 ssh_runner.go:195] Run: systemctl --version
	I1213 20:06:24.490270   56906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:06:24.645893   56906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:06:24.652659   56906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:06:24.652739   56906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:06:24.670190   56906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
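Note on the step above: conflicting bridge/podman CNI configs are not deleted, only renamed with a .mk_disabled suffix so the runtime stops loading them while they remain recoverable. A restatement of that pattern, with the restore loop added purely as an illustration:

	# Rename any bridge/podman CNI configs so CRI-O ignores them (suffix as used in the log above).
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	# To undo (illustrative only): strip the suffix again.
	# for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done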
	I1213 20:06:24.670221   56906 start.go:495] detecting cgroup driver to use...
	I1213 20:06:24.670305   56906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:06:24.685679   56906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:06:24.698719   56906 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:06:24.698768   56906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:06:24.711738   56906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:06:24.724729   56906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:06:24.848823   56906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:06:25.013998   56906 docker.go:233] disabling docker service ...
	I1213 20:06:25.014071   56906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:06:25.032439   56906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:06:25.046478   56906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:06:25.186021   56906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:06:25.310384   56906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
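The runtime-selection block above uses the usual systemd stop / disable / mask sequence on the competing runtimes: stop terminates the unit now, disable removes its install symlinks so it will not start at boot, and mask links the unit to /dev/null so socket activation or dependencies cannot bring it back. A generic sketch of the same pattern (docker used as the example unit):

	# Stop the unit and its socket immediately.
	sudo systemctl stop -f docker.socket docker.service
	# Keep it from starting at boot.
	sudo systemctl disable docker.socket
	# Mask it so nothing (including socket activation) can start it.
	sudo systemctl mask docker.service
	# Confirm it is no longer active.
	sudo systemctl is-active --quiet docker || echo "docker is inactive"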
	I1213 20:06:25.325249   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:06:25.347579   56906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1213 20:06:25.347649   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:06:25.358938   56906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:06:25.359019   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:06:25.373376   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:06:25.387751   56906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
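The three sed calls above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to cgroupfs, and re-insert conmon_cgroup = "pod" right after it (a systemd slice for conmon is only valid with the systemd cgroup manager). A minimal check of the result, assuming the stock drop-in layout and key names implied by the sed patterns:

	# Show the three settings the sed edits are expected to leave behind.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# Expected (assumed) output:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"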
	I1213 20:06:25.401528   56906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:06:25.418825   56906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:06:25.429437   56906 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:06:25.429501   56906 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:06:25.447902   56906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
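The failed sysctl above is expected on a fresh VM: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. The equivalent sequence, for reference:

	# Load the module that exposes the bridge-nf-call-* sysctls.
	sudo modprobe br_netfilter
	# The probe now succeeds; bridged pod traffic will traverse iptables.
	sudo sysctl net.bridge.bridge-nf-call-iptables
	# Allow the node to forward IPv4 traffic between pods and the outside.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"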
	I1213 20:06:25.461450   56906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:06:25.589872   56906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:06:25.700641   56906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:06:25.700702   56906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:06:25.705722   56906 start.go:563] Will wait 60s for crictl version
	I1213 20:06:25.705776   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:25.709549   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:06:25.756975   56906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:06:25.757064   56906 ssh_runner.go:195] Run: crio --version
	I1213 20:06:25.801756   56906 ssh_runner.go:195] Run: crio --version
	I1213 20:06:25.842143   56906 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1213 20:06:25.843399   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetIP
	I1213 20:06:25.847215   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:25.847678   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:06:25.847709   56906 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:06:25.848840   56906 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 20:06:25.853670   56906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:06:25.868750   56906 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-980370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:06:25.868881   56906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:06:25.868946   56906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:06:25.900896   56906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:06:25.900972   56906 ssh_runner.go:195] Run: which lz4
	I1213 20:06:25.905027   56906 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:06:25.909225   56906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:06:25.909257   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1213 20:06:27.434146   56906 crio.go:462] duration metric: took 1.529139944s to copy over tarball
	I1213 20:06:27.434219   56906 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:06:30.075863   56906 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.641612192s)
	I1213 20:06:30.075892   56906 crio.go:469] duration metric: took 2.641717892s to extract the tarball
	I1213 20:06:30.075901   56906 ssh_runner.go:146] rm: /preloaded.tar.lz4
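Since no preloaded images were found, the ~450 MB preload tarball is copied over SSH and unpacked into /var, where the container image store normally lives for CRI-O (under /var/lib/containers); --xattrs --xattrs-include security.capability preserves file capabilities on extracted files. As the following lines show, the v1.20.0 images still are not visible afterwards and the on-disk image cache is also missing, so kubeadm ends up pulling them during preflight. The extraction step, roughly as run above:

	# Unpack the preloaded image tarball into /var, keeping capability xattrs.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# Free the disk space once extracted.
	sudo rm -f /preloaded.tar.lz4
	# List what the runtime can now see.
	sudo crictl images --output json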
	I1213 20:06:30.118736   56906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:06:30.167833   56906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:06:30.167861   56906 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 20:06:30.168007   56906 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.168076   56906 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.167971   56906 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.167969   56906 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.168061   56906 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1213 20:06:30.168065   56906 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.168068   56906 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.167969   56906 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:06:30.170106   56906 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.170131   56906 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1213 20:06:30.170138   56906 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.170148   56906 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:06:30.170142   56906 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.170185   56906 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.170191   56906 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.170506   56906 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.393686   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1213 20:06:30.430395   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.433511   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.436994   56906 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1213 20:06:30.437046   56906 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1213 20:06:30.437091   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.443921   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.471877   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.481102   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.488996   56906 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1213 20:06:30.489038   56906 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.489083   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.499380   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.515338   56906 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1213 20:06:30.515383   56906 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.515392   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:06:30.515423   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.533949   56906 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1213 20:06:30.534057   56906 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.534123   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.582935   56906 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1213 20:06:30.582982   56906 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.583048   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.596622   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.596624   56906 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1213 20:06:30.596721   56906 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.596763   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.611017   56906 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1213 20:06:30.611062   56906 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.611083   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:06:30.611118   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.611133   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.611091   56906 ssh_runner.go:195] Run: which crictl
	I1213 20:06:30.611184   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.714410   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.714603   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.742833   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.742973   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.743108   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.743131   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:06:30.743313   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.816781   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:06:30.817016   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.898683   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1213 20:06:30.898859   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:06:30.905781   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:06:30.905878   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:06:30.907667   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:30.987357   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1213 20:06:30.987469   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:06:30.987474   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1213 20:06:31.029390   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1213 20:06:31.029482   56906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:06:31.029405   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1213 20:06:31.047766   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1213 20:06:31.066590   56906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1213 20:06:31.463901   56906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:06:31.609546   56906 cache_images.go:92] duration metric: took 1.441663474s to LoadCachedImages
	W1213 20:06:31.609631   56906 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1213 20:06:31.609652   56906 kubeadm.go:934] updating node { 192.168.39.131 8443 v1.20.0 crio true true} ...
	I1213 20:06:31.609804   56906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-980370 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 20:06:31.609884   56906 ssh_runner.go:195] Run: crio config
	I1213 20:06:31.657759   56906 cni.go:84] Creating CNI manager for ""
	I1213 20:06:31.657779   56906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:06:31.657803   56906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 20:06:31.657820   56906 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.131 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-980370 NodeName:kubernetes-upgrade-980370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 20:06:31.657953   56906 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-980370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:06:31.658011   56906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1213 20:06:31.668108   56906 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:06:31.668163   56906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:06:31.678705   56906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1213 20:06:31.696332   56906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:06:31.713639   56906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1213 20:06:31.730889   56906 ssh_runner.go:195] Run: grep 192.168.39.131	control-plane.minikube.internal$ /etc/hosts
	I1213 20:06:31.734727   56906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
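Both host entries (host.minikube.internal earlier, control-plane.minikube.internal here) are injected with the same pattern: strip any stale line, append the new mapping, write to a temp file, and sudo cp the result over /etc/hosts. The copy is needed because a shell redirection straight into /etc/hosts would run as the unprivileged SSH user. The pattern in isolation, with the IP and name as hypothetical illustrative values:

	IP=192.168.39.131; NAME=control-plane.minikube.internal
	# Drop any existing line for $NAME, append the fresh mapping, then install via sudo cp.
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts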
	I1213 20:06:31.747535   56906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:06:31.867397   56906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:06:31.888654   56906 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370 for IP: 192.168.39.131
	I1213 20:06:31.888680   56906 certs.go:194] generating shared ca certs ...
	I1213 20:06:31.888701   56906 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:31.888876   56906 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:06:31.888936   56906 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:06:31.888950   56906 certs.go:256] generating profile certs ...
	I1213 20:06:31.889029   56906 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.key
	I1213 20:06:31.889050   56906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.crt with IP's: []
	I1213 20:06:32.063952   56906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.crt ...
	I1213 20:06:32.063986   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.crt: {Name:mk519bb78bb2b6ca3e1fd8d9249c5f8f53cfe198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.064192   56906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.key ...
	I1213 20:06:32.064213   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.key: {Name:mk81b9541e25b329f8d2fc88b83d05b8102305d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.064359   56906 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key.d36e1b53
	I1213 20:06:32.064384   56906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt.d36e1b53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.131]
	I1213 20:06:32.339776   56906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt.d36e1b53 ...
	I1213 20:06:32.339805   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt.d36e1b53: {Name:mk195733e4f48296930484745a5370f60080a9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.433978   56906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key.d36e1b53 ...
	I1213 20:06:32.434027   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key.d36e1b53: {Name:mk8b9b7b9a4f686017575dd12ca804a5d01fa712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.434180   56906 certs.go:381] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt.d36e1b53 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt
	I1213 20:06:32.434255   56906 certs.go:385] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key.d36e1b53 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key
	I1213 20:06:32.434338   56906 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.key
	I1213 20:06:32.434355   56906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.crt with IP's: []
	I1213 20:06:32.773489   56906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.crt ...
	I1213 20:06:32.773521   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.crt: {Name:mk9ee6f11e8092e66f8313a498f13a2f27a06cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.773698   56906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.key ...
	I1213 20:06:32.773712   56906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.key: {Name:mk73bfc0dbda3d060cfc477486b7cf6d616162ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:06:32.773875   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:06:32.773914   56906 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:06:32.773925   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:06:32.773946   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:06:32.773968   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:06:32.773989   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:06:32.774033   56906 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:06:32.774568   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:06:32.805186   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:06:32.829201   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:06:32.859260   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:06:32.893795   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 20:06:32.928755   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 20:06:32.953237   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:06:32.978417   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 20:06:33.003645   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:06:33.028708   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:06:33.053208   56906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:06:33.077374   56906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:06:33.093529   56906 ssh_runner.go:195] Run: openssl version
	I1213 20:06:33.099452   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:06:33.111929   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:06:33.116490   56906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:06:33.116547   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:06:33.122458   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:06:33.134648   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:06:33.149791   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:06:33.155699   56906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:06:33.155759   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:06:33.162033   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:06:33.173511   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:06:33.185365   56906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:06:33.189745   56906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:06:33.189807   56906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:06:33.195684   56906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
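The ls / openssl / ln sequence above builds the hash-named symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL's CApath lookup requires: a certificate under /etc/ssl/certs is found by its subject-hash filename, not by its human-readable name. Recreating one link by hand, using a path taken from the log for illustration:

	CERT=/usr/share/ca-certificates/minikubeCA.pem    # illustrative path from the log above
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
	# Link <hash>.0 to the certificate so CApath-based verification can resolve it.
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"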
	I1213 20:06:33.206224   56906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:06:33.210261   56906 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 20:06:33.210329   56906 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-980370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-980370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:06:33.210446   56906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:06:33.210497   56906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:06:33.247762   56906 cri.go:89] found id: ""
	I1213 20:06:33.247842   56906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:06:33.258951   56906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:06:33.269517   56906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:06:33.279926   56906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:06:33.279952   56906 kubeadm.go:157] found existing configuration files:
	
	I1213 20:06:33.280004   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:06:33.289644   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:06:33.289696   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:06:33.299853   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:06:33.310197   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:06:33.310253   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:06:33.319744   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:06:33.329379   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:06:33.329430   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:06:33.338880   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:06:33.348215   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:06:33.348290   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:06:33.357613   56906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:06:33.479307   56906 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:06:33.479449   56906 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:06:33.635748   56906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:06:33.635872   56906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:06:33.635974   56906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:06:33.886245   56906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:06:33.905512   56906 out.go:235]   - Generating certificates and keys ...
	I1213 20:06:33.905635   56906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:06:33.905732   56906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:06:34.164291   56906 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 20:06:34.290206   56906 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 20:06:34.494523   56906 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 20:06:34.662650   56906 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 20:06:34.810479   56906 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 20:06:34.810679   56906 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I1213 20:06:35.013617   56906 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 20:06:35.013893   56906 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I1213 20:06:35.159323   56906 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 20:06:35.294050   56906 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 20:06:35.530262   56906 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 20:06:35.530509   56906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:06:35.793102   56906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:06:35.946952   56906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:06:36.011600   56906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:06:36.174829   56906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:06:36.200893   56906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:06:36.201516   56906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:06:36.201582   56906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:06:36.343927   56906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:06:36.345578   56906 out.go:235]   - Booting up control plane ...
	I1213 20:06:36.345705   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:06:36.359035   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:06:36.361799   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:06:36.363911   56906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:06:36.368603   56906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:07:16.332205   56906 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:07:16.332624   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:07:16.332922   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:07:21.331594   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:07:21.331900   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:07:31.330097   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:07:31.330423   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:07:51.329875   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:07:51.330130   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:08:31.328943   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:08:31.329232   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:08:31.329254   56906 kubeadm.go:310] 
	I1213 20:08:31.329317   56906 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:08:31.329378   56906 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:08:31.329388   56906 kubeadm.go:310] 
	I1213 20:08:31.329446   56906 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:08:31.329480   56906 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:08:31.329620   56906 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:08:31.329632   56906 kubeadm.go:310] 
	I1213 20:08:31.329771   56906 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:08:31.329826   56906 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:08:31.329878   56906 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:08:31.329887   56906 kubeadm.go:310] 
	I1213 20:08:31.330057   56906 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:08:31.330195   56906 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:08:31.330206   56906 kubeadm.go:310] 
	I1213 20:08:31.330337   56906 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:08:31.330426   56906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:08:31.330510   56906 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:08:31.330597   56906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:08:31.330608   56906 kubeadm.go:310] 
	I1213 20:08:31.331355   56906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:08:31.331444   56906 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:08:31.331524   56906 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1213 20:08:31.331648   56906 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-980370 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
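At this point kubeadm has given up waiting for the kubelet's health endpoint. Its suggested diagnostics are scattered through the output above; grouped here as one illustrative pass to run on the node (for example after `minikube ssh -p kubernetes-upgrade-980370`; the grouping, the ssh step, and the sudo prefixes are assumptions, while the commands themselves are taken from the kubeadm output):

	curl -sSL http://localhost:10248/healthz      # the kubelet health check kubeadm polls
	sudo systemctl status kubelet                 # is the kubelet unit active?
	sudo journalctl -xeu kubelet                  # recent kubelet errors
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# for any failed control-plane container found above:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID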
	
	I1213 20:08:31.331695   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:08:31.873726   56906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:08:31.887552   56906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:08:31.897428   56906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:08:31.897456   56906 kubeadm.go:157] found existing configuration files:
	
	I1213 20:08:31.897513   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:08:31.907137   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:08:31.907203   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:08:31.917566   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:08:31.926509   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:08:31.926579   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:08:31.936207   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:08:31.945151   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:08:31.945213   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:08:31.955531   56906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:08:31.964134   56906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:08:31.964187   56906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:08:31.975314   56906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:08:32.194717   56906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:10:28.418445   56906 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:10:28.418586   56906 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:10:28.420668   56906 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:10:28.420725   56906 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:10:28.420817   56906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:10:28.420973   56906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:10:28.421111   56906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:10:28.421201   56906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:10:28.423057   56906 out.go:235]   - Generating certificates and keys ...
	I1213 20:10:28.423164   56906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:10:28.423236   56906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:10:28.423373   56906 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:10:28.423485   56906 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:10:28.423569   56906 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:10:28.423651   56906 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:10:28.423744   56906 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:10:28.423849   56906 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:10:28.423962   56906 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:10:28.424067   56906 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:10:28.424123   56906 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:10:28.424202   56906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:10:28.424274   56906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:10:28.424324   56906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:10:28.424414   56906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:10:28.424512   56906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:10:28.424637   56906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:10:28.424742   56906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:10:28.424805   56906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:10:28.424898   56906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:10:28.426547   56906 out.go:235]   - Booting up control plane ...
	I1213 20:10:28.426637   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:10:28.426704   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:10:28.426779   56906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:10:28.426889   56906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:10:28.427089   56906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:10:28.427180   56906 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:10:28.427285   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:10:28.427495   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:10:28.427562   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:10:28.427721   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:10:28.427820   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:10:28.428031   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:10:28.428117   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:10:28.428304   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:10:28.428367   56906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:10:28.428518   56906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:10:28.428525   56906 kubeadm.go:310] 
	I1213 20:10:28.428559   56906 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:10:28.428593   56906 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:10:28.428603   56906 kubeadm.go:310] 
	I1213 20:10:28.428634   56906 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:10:28.428663   56906 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:10:28.428754   56906 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:10:28.428761   56906 kubeadm.go:310] 
	I1213 20:10:28.428862   56906 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:10:28.428919   56906 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:10:28.428981   56906 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:10:28.428998   56906 kubeadm.go:310] 
	I1213 20:10:28.429117   56906 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:10:28.429227   56906 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:10:28.429237   56906 kubeadm.go:310] 
	I1213 20:10:28.429329   56906 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:10:28.429412   56906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:10:28.429536   56906 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:10:28.429634   56906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:10:28.429652   56906 kubeadm.go:310] 
	I1213 20:10:28.429706   56906 kubeadm.go:394] duration metric: took 3m55.219382933s to StartCluster
	I1213 20:10:28.429769   56906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:10:28.429834   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:10:28.469920   56906 cri.go:89] found id: ""
	I1213 20:10:28.469950   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.469961   56906 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:10:28.469969   56906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:10:28.470030   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:10:28.513323   56906 cri.go:89] found id: ""
	I1213 20:10:28.513354   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.513364   56906 logs.go:284] No container was found matching "etcd"
	I1213 20:10:28.513373   56906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:10:28.513435   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:10:28.550508   56906 cri.go:89] found id: ""
	I1213 20:10:28.550535   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.550545   56906 logs.go:284] No container was found matching "coredns"
	I1213 20:10:28.550557   56906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:10:28.550622   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:10:28.598475   56906 cri.go:89] found id: ""
	I1213 20:10:28.598533   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.598545   56906 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:10:28.598554   56906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:10:28.598614   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:10:28.637641   56906 cri.go:89] found id: ""
	I1213 20:10:28.637668   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.637675   56906 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:10:28.637681   56906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:10:28.637731   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:10:28.673481   56906 cri.go:89] found id: ""
	I1213 20:10:28.673509   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.673520   56906 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:10:28.673528   56906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:10:28.673584   56906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:10:28.706997   56906 cri.go:89] found id: ""
	I1213 20:10:28.707024   56906 logs.go:282] 0 containers: []
	W1213 20:10:28.707087   56906 logs.go:284] No container was found matching "kindnet"
	I1213 20:10:28.707102   56906 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:10:28.707120   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:10:28.809648   56906 logs.go:123] Gathering logs for container status ...
	I1213 20:10:28.809682   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:10:28.849443   56906 logs.go:123] Gathering logs for kubelet ...
	I1213 20:10:28.849475   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:10:28.905767   56906 logs.go:123] Gathering logs for dmesg ...
	I1213 20:10:28.905797   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:10:28.921382   56906 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:10:28.921412   56906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:10:29.063626   56906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
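With no control-plane containers found, minikube falls back to collecting node-level logs. The collection commands it runs (each shown individually above) amount to the following, reproduced here only as a convenient summary; the final kubectl call fails as shown because the apiserver on localhost:8443 never came up:

	sudo journalctl -u crio -n 400        # CRI-O runtime logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # container status
	sudo journalctl -u kubelet -n 400     # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig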
	W1213 20:10:29.063656   56906 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:10:29.063724   56906 out.go:270] * 
	* 
	W1213 20:10:29.063787   56906 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:10:29.063802   56906 out.go:270] * 
	* 
	W1213 20:10:29.064803   56906 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
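Following the advice in the box, the full log bundle for this profile can be captured with the report's own binary (flag usage assumed from the invocations elsewhere in this report):

	out/minikube-linux-amd64 -p kubernetes-upgrade-980370 logs --file=logs.txt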
	I1213 20:10:29.068263   56906 out.go:201] 
	W1213 20:10:29.069813   56906 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:10:29.069884   56906 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:10:29.069917   56906 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:10:29.072145   56906 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
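Note: the stderr captured above surfaces the same remediation hint twice ('try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start', with kubernetes/minikube#4172 as the related issue). A minimal retry of the failing oldest-version start with that flag added, assuming the same profile name and flags the test used, would look like:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 \
	    --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
	    --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Whether that clears the K8S_KUBELET_NOT_RUNNING exit on this runner is not verified here; the test instead proceeds by stopping the profile and restarting it at v1.31.2 below.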
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-980370
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-980370: (6.330233348s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-980370 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-980370 status --format={{.Host}}: exit status 7 (66.479571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1213 20:10:41.726923   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.50463308s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-980370 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (114.213326ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-980370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-980370
	    minikube start -p kubernetes-upgrade-980370 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9803702 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-980370 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-980370 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (29.851509418s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-13 20:11:47.098932945 +0000 UTC m=+4205.374425884
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-980370 -n kubernetes-upgrade-980370
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-980370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-980370 logs -n 25: (1.90897006s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo cat                              | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo docker                           | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo cat                              | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo cat                              | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo                                  | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo cat                              | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo cat                              | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo containerd                       | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo systemctl                        | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo find                             | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-918860 sudo crio                             | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-918860                                       | auto-918860    | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	| start   | -p kindnet-918860                                    | kindnet-918860 | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| pause   | -p pause-822439                                      | pause-822439   | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | --alsologtostderr -v=5                               |                |         |         |                     |                     |
	| unpause | -p pause-822439                                      | pause-822439   | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | --alsologtostderr -v=5                               |                |         |         |                     |                     |
	| pause   | -p pause-822439                                      | pause-822439   | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | --alsologtostderr -v=5                               |                |         |         |                     |                     |
	| delete  | -p pause-822439                                      | pause-822439   | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	|         | --alsologtostderr -v=5                               |                |         |         |                     |                     |
	| delete  | -p pause-822439                                      | pause-822439   | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC | 13 Dec 24 20:11 UTC |
	| start   | -p calico-918860 --memory=3072                       | calico-918860  | jenkins | v1.34.0 | 13 Dec 24 20:11 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 20:11:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 20:11:45.516270   63753 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:11:45.516387   63753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:11:45.516397   63753 out.go:358] Setting ErrFile to fd 2...
	I1213 20:11:45.516402   63753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:11:45.516571   63753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:11:45.517344   63753 out.go:352] Setting JSON to false
	I1213 20:11:45.518299   63753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6848,"bootTime":1734113857,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:11:45.518389   63753 start.go:139] virtualization: kvm guest
	I1213 20:11:45.520390   63753 out.go:177] * [calico-918860] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:11:45.521999   63753 notify.go:220] Checking for updates...
	I1213 20:11:45.522030   63753 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:11:45.523284   63753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:11:45.524599   63753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:11:45.525845   63753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:11:45.527064   63753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:11:45.528265   63753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:11:42.492135   62453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:11:42.514722   62453 api_server.go:72] duration metric: took 1.023066981s to wait for apiserver process to appear ...
	I1213 20:11:42.514752   62453 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:11:42.514773   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:44.000936   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:11:44.000969   62453 api_server.go:103] status: https://192.168.39.131:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:11:44.000986   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:44.039174   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:11:44.039212   62453 api_server.go:103] status: https://192.168.39.131:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:11:44.039228   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:44.057813   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:11:44.057837   62453 api_server.go:103] status: https://192.168.39.131:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:11:44.515413   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:44.521380   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:11:44.521404   62453 api_server.go:103] status: https://192.168.39.131:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:11:45.014911   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:45.024606   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:11:45.024632   62453 api_server.go:103] status: https://192.168.39.131:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:11:45.515437   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:45.522278   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 200:
	ok
	I1213 20:11:45.528779   62453 api_server.go:141] control plane version: v1.31.2
	I1213 20:11:45.528799   62453 api_server.go:131] duration metric: took 3.014040874s to wait for apiserver health ...
	I1213 20:11:45.528837   62453 cni.go:84] Creating CNI manager for ""
	I1213 20:11:45.528845   62453 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:11:45.530000   62453 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:11:45.529780   63753 config.go:182] Loaded profile config "cert-expiration-616278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:11:45.529870   63753 config.go:182] Loaded profile config "kindnet-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:11:45.529970   63753 config.go:182] Loaded profile config "kubernetes-upgrade-980370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:11:45.530049   63753 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:11:45.573427   63753 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 20:11:45.574693   63753 start.go:297] selected driver: kvm2
	I1213 20:11:45.574709   63753 start.go:901] validating driver "kvm2" against <nil>
	I1213 20:11:45.574722   63753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:11:45.575719   63753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:11:45.575784   63753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:11:45.593186   63753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:11:45.593243   63753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 20:11:45.593489   63753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:11:45.593526   63753 cni.go:84] Creating CNI manager for "calico"
	I1213 20:11:45.593532   63753 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1213 20:11:45.593615   63753 start.go:340] cluster config:
	{Name:calico-918860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-918860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:11:45.593774   63753 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:11:45.595548   63753 out.go:177] * Starting "calico-918860" primary control-plane node in "calico-918860" cluster
	I1213 20:11:45.531096   62453 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:11:45.542257   62453 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:11:45.562969   62453 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:11:45.563064   62453 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 20:11:45.563083   62453 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 20:11:45.575749   62453 system_pods.go:59] 8 kube-system pods found
	I1213 20:11:45.575780   62453 system_pods.go:61] "coredns-7c65d6cfc9-6vrmh" [c5b5400d-2c1b-490c-b032-8a0caf974bc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:11:45.575796   62453 system_pods.go:61] "coredns-7c65d6cfc9-jf2d4" [98adeccd-1fab-4b83-bee7-c87cceb68777] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:11:45.575810   62453 system_pods.go:61] "etcd-kubernetes-upgrade-980370" [719edb44-2df6-4e5e-86c7-ca0147d7f077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:11:45.575822   62453 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-980370" [7545330a-b585-429b-acec-928edf2f78ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:11:45.575837   62453 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-980370" [3db32404-f2a1-4894-b13e-7165647616d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:11:45.575850   62453 system_pods.go:61] "kube-proxy-swjtc" [f5d07f93-7de7-44f5-86a4-2f7477b820ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 20:11:45.575862   62453 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-980370" [9b61ea9b-9513-4bc9-acba-a6b7207b5248] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:11:45.575870   62453 system_pods.go:61] "storage-provisioner" [c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 20:11:45.575881   62453 system_pods.go:74] duration metric: took 12.8845ms to wait for pod list to return data ...
	I1213 20:11:45.575892   62453 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:11:45.580445   62453 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:11:45.580473   62453 node_conditions.go:123] node cpu capacity is 2
	I1213 20:11:45.580485   62453 node_conditions.go:105] duration metric: took 4.587479ms to run NodePressure ...
	I1213 20:11:45.580504   62453 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:11:45.939789   62453 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:11:45.952375   62453 ops.go:34] apiserver oom_adj: -16
	I1213 20:11:45.952397   62453 kubeadm.go:597] duration metric: took 20.741790836s to restartPrimaryControlPlane
	I1213 20:11:45.952406   62453 kubeadm.go:394] duration metric: took 20.870870965s to StartCluster
	I1213 20:11:45.952420   62453 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:11:45.952515   62453 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:11:45.953350   62453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:11:45.953586   62453 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:11:45.953668   62453 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:11:45.953762   62453 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-980370"
	I1213 20:11:45.953783   62453 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-980370"
	W1213 20:11:45.953792   62453 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:11:45.953794   62453 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-980370"
	I1213 20:11:45.953817   62453 host.go:66] Checking if "kubernetes-upgrade-980370" exists ...
	I1213 20:11:45.953830   62453 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-980370"
	I1213 20:11:45.953849   62453 config.go:182] Loaded profile config "kubernetes-upgrade-980370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:11:45.954239   62453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:11:45.954239   62453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:11:45.954286   62453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:11:45.954293   62453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:11:45.955950   62453 out.go:177] * Verifying Kubernetes components...
	I1213 20:11:45.957209   62453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:11:45.969454   62453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I1213 20:11:45.969912   62453 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:11:45.970351   62453 main.go:141] libmachine: Using API Version  1
	I1213 20:11:45.970374   62453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:11:45.970717   62453 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:11:45.970897   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetState
	I1213 20:11:45.973718   62453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I1213 20:11:45.973688   62453 kapi.go:59] client config for kubernetes-upgrade-980370: &rest.Config{Host:"https://192.168.39.131:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.crt", KeyFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kubernetes-upgrade-980370/client.key", CAFile:"/home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243da20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 20:11:45.974016   62453 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-980370"
	W1213 20:11:45.974036   62453 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:11:45.974062   62453 host.go:66] Checking if "kubernetes-upgrade-980370" exists ...
	I1213 20:11:45.974166   62453 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:11:45.974431   62453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:11:45.974481   62453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:11:45.974656   62453 main.go:141] libmachine: Using API Version  1
	I1213 20:11:45.974680   62453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:11:45.975080   62453 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:11:45.975511   62453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:11:45.975541   62453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:11:45.989424   62453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I1213 20:11:45.989852   62453 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:11:45.990305   62453 main.go:141] libmachine: Using API Version  1
	I1213 20:11:45.990328   62453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:11:45.990627   62453 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:11:45.991204   62453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:11:45.991244   62453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:11:45.993882   62453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I1213 20:11:45.994345   62453 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:11:45.994790   62453 main.go:141] libmachine: Using API Version  1
	I1213 20:11:45.994808   62453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:11:45.995156   62453 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:11:45.995450   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetState
	I1213 20:11:45.997420   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:11:45.999040   62453 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:11:46.000211   62453 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:11:46.000231   62453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:11:46.000252   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:11:46.004336   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:11:46.004912   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:11:46.004954   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:11:46.005093   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:11:46.005340   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:11:46.005517   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:11:46.005760   62453 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:11:46.015976   62453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I1213 20:11:46.016401   62453 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:11:46.016915   62453 main.go:141] libmachine: Using API Version  1
	I1213 20:11:46.016938   62453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:11:46.017241   62453 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:11:46.017439   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetState
	I1213 20:11:46.019105   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .DriverName
	I1213 20:11:46.019317   62453 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:11:46.019336   62453 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:11:46.019353   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHHostname
	I1213 20:11:46.022531   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:11:46.022928   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:2c:2f", ip: ""} in network mk-kubernetes-upgrade-980370: {Iface:virbr1 ExpiryTime:2024-12-13 21:06:15 +0000 UTC Type:0 Mac:52:54:00:b8:2c:2f Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:kubernetes-upgrade-980370 Clientid:01:52:54:00:b8:2c:2f}
	I1213 20:11:46.022960   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | domain kubernetes-upgrade-980370 has defined IP address 192.168.39.131 and MAC address 52:54:00:b8:2c:2f in network mk-kubernetes-upgrade-980370
	I1213 20:11:46.023140   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHPort
	I1213 20:11:46.023322   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHKeyPath
	I1213 20:11:46.023437   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .GetSSHUsername
	I1213 20:11:46.023593   62453 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/kubernetes-upgrade-980370/id_rsa Username:docker}
	I1213 20:11:46.167591   62453 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:11:46.185489   62453 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:11:46.185573   62453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:11:46.206718   62453 api_server.go:72] duration metric: took 253.095164ms to wait for apiserver process to appear ...
	I1213 20:11:46.206746   62453 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:11:46.206763   62453 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1213 20:11:46.215706   62453 api_server.go:279] https://192.168.39.131:8443/healthz returned 200:
	ok
	I1213 20:11:46.216697   62453 api_server.go:141] control plane version: v1.31.2
	I1213 20:11:46.216716   62453 api_server.go:131] duration metric: took 9.963658ms to wait for apiserver health ...
	I1213 20:11:46.216723   62453 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:11:46.222435   62453 system_pods.go:59] 8 kube-system pods found
	I1213 20:11:46.222470   62453 system_pods.go:61] "coredns-7c65d6cfc9-6vrmh" [c5b5400d-2c1b-490c-b032-8a0caf974bc0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:11:46.222481   62453 system_pods.go:61] "coredns-7c65d6cfc9-jf2d4" [98adeccd-1fab-4b83-bee7-c87cceb68777] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:11:46.222492   62453 system_pods.go:61] "etcd-kubernetes-upgrade-980370" [719edb44-2df6-4e5e-86c7-ca0147d7f077] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:11:46.222501   62453 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-980370" [7545330a-b585-429b-acec-928edf2f78ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:11:46.222512   62453 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-980370" [3db32404-f2a1-4894-b13e-7165647616d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:11:46.222519   62453 system_pods.go:61] "kube-proxy-swjtc" [f5d07f93-7de7-44f5-86a4-2f7477b820ab] Running
	I1213 20:11:46.222529   62453 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-980370" [9b61ea9b-9513-4bc9-acba-a6b7207b5248] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:11:46.222536   62453 system_pods.go:61] "storage-provisioner" [c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a] Running
	I1213 20:11:46.222544   62453 system_pods.go:74] duration metric: took 5.813936ms to wait for pod list to return data ...
	I1213 20:11:46.222558   62453 kubeadm.go:582] duration metric: took 268.939257ms to wait for: map[apiserver:true system_pods:true]
	I1213 20:11:46.222573   62453 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:11:46.224500   62453 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:11:46.224519   62453 node_conditions.go:123] node cpu capacity is 2
	I1213 20:11:46.224527   62453 node_conditions.go:105] duration metric: took 1.947241ms to run NodePressure ...
	I1213 20:11:46.224536   62453 start.go:241] waiting for startup goroutines ...
	I1213 20:11:46.251801   62453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:11:46.377582   62453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:11:46.429599   62453 main.go:141] libmachine: Making call to close driver server
	I1213 20:11:46.429625   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Close
	I1213 20:11:46.429910   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) DBG | Closing plugin on server side
	I1213 20:11:46.429918   62453 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:11:46.429931   62453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:11:46.429939   62453 main.go:141] libmachine: Making call to close driver server
	I1213 20:11:46.429946   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Close
	I1213 20:11:46.430257   62453 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:11:46.430287   62453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:11:46.436411   62453 main.go:141] libmachine: Making call to close driver server
	I1213 20:11:46.436430   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Close
	I1213 20:11:46.436673   62453 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:11:46.436691   62453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:11:47.030306   62453 main.go:141] libmachine: Making call to close driver server
	I1213 20:11:47.030337   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Close
	I1213 20:11:47.030614   62453 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:11:47.030637   62453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:11:47.030648   62453 main.go:141] libmachine: Making call to close driver server
	I1213 20:11:47.030656   62453 main.go:141] libmachine: (kubernetes-upgrade-980370) Calling .Close
	I1213 20:11:47.030891   62453 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:11:47.030908   62453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:11:47.032959   62453 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1213 20:11:47.034088   62453 addons.go:510] duration metric: took 1.08042661s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 20:11:47.034121   62453 start.go:246] waiting for cluster config update ...
	I1213 20:11:47.034130   62453 start.go:255] writing updated cluster config ...
	I1213 20:11:47.034349   62453 ssh_runner.go:195] Run: rm -f paused
	I1213 20:11:47.084045   62453 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:11:47.085790   62453 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-980370" cluster and "default" namespace by default
	I1213 20:11:45.596612   63753 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:11:45.596657   63753 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 20:11:45.596669   63753 cache.go:56] Caching tarball of preloaded images
	I1213 20:11:45.596769   63753 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:11:45.596784   63753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 20:11:45.596896   63753 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/config.json ...
	I1213 20:11:45.596916   63753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/config.json: {Name:mk137c61538aeaab4e45198ff4dcc77c70755d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:11:45.597051   63753 start.go:360] acquireMachinesLock for calico-918860: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:11:47.116806   63753 start.go:364] duration metric: took 1.519725451s to acquireMachinesLock for "calico-918860"
	I1213 20:11:47.116864   63753 start.go:93] Provisioning new machine with config: &{Name:calico-918860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:calico-918860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:11:47.116962   63753 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.942415087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120707942391386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=052a67cb-56bd-4c79-86a9-ae0f6207a5de name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.943021822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=606f4aaf-5817-4ce2-a1b3-b610dea7592f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.943072349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=606f4aaf-5817-4ce2-a1b3-b610dea7592f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.943430472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd96085405361fe29fa21ce1d9f58c66ebed8d84588f78fff6b53045972416c7,PodSandboxId:3475c7951364544f9efced97be0eb3e16b2baffda2ccc1e61410d5a6151df84b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734120704790843231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272475bcf7f1e4c8e8dbe04ab4d3a3270360eef6fbd54befbbc1d633121dc96b,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120704788173403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32986509583b5140fd5adccf1ec1bb54f14fe6cce5dc7331db819bb602f40ebd,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704777974766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464f7b2b1ec7f54606057b5dfcd96d6f881ed374479eeb80cda506844dc9c156,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704765500400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c8
7cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f82984a47b4cbf142292ae511dd86f6a8088c33775d64f6fe2e991c9a3d330a,PodSandboxId:9510431685e71f06a977c7b58b2c07128ba23c83f637c8858e1c9707af81c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734120701952122817,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39044a484e5d5382da10c9731d52c9c5d2bd2fe061e2130b8e79e6d5d799d562,PodSandboxId:bb321762b99e6ff02af3ea1e4f6422ee913e5fd63c03a5afe16bffa2e901546b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734120701921126
208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1734120698069196101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbc5f11696379908016ed8b82dbd2724eba7101e79f966b6d5d9b5a62f94de8,PodSandboxId:8429de264662082d5ce6c4e7cffc6d184685121cdce05b235a12f724b80b6aa7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734120698068477
897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d2f978875bd2efb49b939dd16305541f435c8a86f253f949d6604d2ce4e17e4,PodSandboxId:a2e3c861702e744da4f42def4450531cfada309b6910116c55c7ad517e234499,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734120698050247086,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684398984495,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684295610583,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c87cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb,PodSandboxId:730d7124c3a318c3698ab538ebe68afe8e93b374d98fb7897b442950620
bfd58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1734120680903556583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a94e1132b5157ee1f619dcb366b4b8fbd521a870b4b2259ca3955ef0b58723,PodSandboxId:f6572ec698a70893a1d952cbdf1732205f6c445bdee5c61097295a0ba1f06258,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1734120680958899159,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322,PodSandboxId:9af0942e65761431783caf94ebce907a50d096f15d8a22ce1567f82c8f961fd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1734120680884870039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375,PodSandboxId:66bf7139cde704876d68fc40d486c0c40981fc3220bba57f2113f7e8384d2b21,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1734120680729625922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128ec710f342a563ce34ae78cbfcafee41318e744a709606ff0370081f95d6a9,PodSandboxId:1ea4a603365f66551f012159dc1a853cd73bca920f5621d5bf8160df1ed49d5f,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1734120680771073727,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=606f4aaf-5817-4ce2-a1b3-b610dea7592f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.994866962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d04be328-baf9-4d58-a59c-63bc48542612 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.995000292Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d04be328-baf9-4d58-a59c-63bc48542612 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.997053047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d74bc939-2659-421f-9a74-98fbe7c409cf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.997577321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120707997552705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d74bc939-2659-421f-9a74-98fbe7c409cf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.998467252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e63ca2a-3406-45d4-8596-c4bdb97bb66b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:47 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.998550635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e63ca2a-3406-45d4-8596-c4bdb97bb66b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:47.999362640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd96085405361fe29fa21ce1d9f58c66ebed8d84588f78fff6b53045972416c7,PodSandboxId:3475c7951364544f9efced97be0eb3e16b2baffda2ccc1e61410d5a6151df84b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734120704790843231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272475bcf7f1e4c8e8dbe04ab4d3a3270360eef6fbd54befbbc1d633121dc96b,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120704788173403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32986509583b5140fd5adccf1ec1bb54f14fe6cce5dc7331db819bb602f40ebd,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704777974766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464f7b2b1ec7f54606057b5dfcd96d6f881ed374479eeb80cda506844dc9c156,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704765500400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c8
7cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f82984a47b4cbf142292ae511dd86f6a8088c33775d64f6fe2e991c9a3d330a,PodSandboxId:9510431685e71f06a977c7b58b2c07128ba23c83f637c8858e1c9707af81c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734120701952122817,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39044a484e5d5382da10c9731d52c9c5d2bd2fe061e2130b8e79e6d5d799d562,PodSandboxId:bb321762b99e6ff02af3ea1e4f6422ee913e5fd63c03a5afe16bffa2e901546b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734120701921126
208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1734120698069196101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbc5f11696379908016ed8b82dbd2724eba7101e79f966b6d5d9b5a62f94de8,PodSandboxId:8429de264662082d5ce6c4e7cffc6d184685121cdce05b235a12f724b80b6aa7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734120698068477
897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d2f978875bd2efb49b939dd16305541f435c8a86f253f949d6604d2ce4e17e4,PodSandboxId:a2e3c861702e744da4f42def4450531cfada309b6910116c55c7ad517e234499,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734120698050247086,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684398984495,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684295610583,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c87cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb,PodSandboxId:730d7124c3a318c3698ab538ebe68afe8e93b374d98fb7897b442950620
bfd58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1734120680903556583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a94e1132b5157ee1f619dcb366b4b8fbd521a870b4b2259ca3955ef0b58723,PodSandboxId:f6572ec698a70893a1d952cbdf1732205f6c445bdee5c61097295a0ba1f06258,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1734120680958899159,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322,PodSandboxId:9af0942e65761431783caf94ebce907a50d096f15d8a22ce1567f82c8f961fd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1734120680884870039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375,PodSandboxId:66bf7139cde704876d68fc40d486c0c40981fc3220bba57f2113f7e8384d2b21,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1734120680729625922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128ec710f342a563ce34ae78cbfcafee41318e744a709606ff0370081f95d6a9,PodSandboxId:1ea4a603365f66551f012159dc1a853cd73bca920f5621d5bf8160df1ed49d5f,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1734120680771073727,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e63ca2a-3406-45d4-8596-c4bdb97bb66b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.056262829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b86715b5-4e04-45c7-abc5-73ef1f05a65f name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.056352893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b86715b5-4e04-45c7-abc5-73ef1f05a65f name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.057456008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b29cde8a-d113-4a97-b9f5-861e2e9a49c3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.057924827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120708057897885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b29cde8a-d113-4a97-b9f5-861e2e9a49c3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.058628401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1e2d8db-ffdc-4ab0-9908-1ce63b269caf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.058685171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1e2d8db-ffdc-4ab0-9908-1ce63b269caf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.059026424Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd96085405361fe29fa21ce1d9f58c66ebed8d84588f78fff6b53045972416c7,PodSandboxId:3475c7951364544f9efced97be0eb3e16b2baffda2ccc1e61410d5a6151df84b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734120704790843231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272475bcf7f1e4c8e8dbe04ab4d3a3270360eef6fbd54befbbc1d633121dc96b,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120704788173403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32986509583b5140fd5adccf1ec1bb54f14fe6cce5dc7331db819bb602f40ebd,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704777974766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464f7b2b1ec7f54606057b5dfcd96d6f881ed374479eeb80cda506844dc9c156,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704765500400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c8
7cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f82984a47b4cbf142292ae511dd86f6a8088c33775d64f6fe2e991c9a3d330a,PodSandboxId:9510431685e71f06a977c7b58b2c07128ba23c83f637c8858e1c9707af81c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734120701952122817,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39044a484e5d5382da10c9731d52c9c5d2bd2fe061e2130b8e79e6d5d799d562,PodSandboxId:bb321762b99e6ff02af3ea1e4f6422ee913e5fd63c03a5afe16bffa2e901546b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734120701921126
208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1734120698069196101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbc5f11696379908016ed8b82dbd2724eba7101e79f966b6d5d9b5a62f94de8,PodSandboxId:8429de264662082d5ce6c4e7cffc6d184685121cdce05b235a12f724b80b6aa7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734120698068477
897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d2f978875bd2efb49b939dd16305541f435c8a86f253f949d6604d2ce4e17e4,PodSandboxId:a2e3c861702e744da4f42def4450531cfada309b6910116c55c7ad517e234499,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734120698050247086,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684398984495,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684295610583,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c87cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb,PodSandboxId:730d7124c3a318c3698ab538ebe68afe8e93b374d98fb7897b442950620
bfd58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1734120680903556583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a94e1132b5157ee1f619dcb366b4b8fbd521a870b4b2259ca3955ef0b58723,PodSandboxId:f6572ec698a70893a1d952cbdf1732205f6c445bdee5c61097295a0ba1f06258,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1734120680958899159,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322,PodSandboxId:9af0942e65761431783caf94ebce907a50d096f15d8a22ce1567f82c8f961fd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1734120680884870039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375,PodSandboxId:66bf7139cde704876d68fc40d486c0c40981fc3220bba57f2113f7e8384d2b21,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1734120680729625922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128ec710f342a563ce34ae78cbfcafee41318e744a709606ff0370081f95d6a9,PodSandboxId:1ea4a603365f66551f012159dc1a853cd73bca920f5621d5bf8160df1ed49d5f,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1734120680771073727,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1e2d8db-ffdc-4ab0-9908-1ce63b269caf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.106926212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e47d74e-ed2f-439b-8bf8-0e3d5322bb47 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.107055480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e47d74e-ed2f-439b-8bf8-0e3d5322bb47 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.108513236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd9374d6-a9af-46a9-9618-05e187e13612 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.109470513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734120708109396094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd9374d6-a9af-46a9-9618-05e187e13612 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.110107086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80cd7767-019b-437a-9a2c-9c8d1335cd4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.110183599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80cd7767-019b-437a-9a2c-9c8d1335cd4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:11:48 kubernetes-upgrade-980370 crio[3159]: time="2024-12-13 20:11:48.111114164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd96085405361fe29fa21ce1d9f58c66ebed8d84588f78fff6b53045972416c7,PodSandboxId:3475c7951364544f9efced97be0eb3e16b2baffda2ccc1e61410d5a6151df84b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734120704790843231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272475bcf7f1e4c8e8dbe04ab4d3a3270360eef6fbd54befbbc1d633121dc96b,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734120704788173403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32986509583b5140fd5adccf1ec1bb54f14fe6cce5dc7331db819bb602f40ebd,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704777974766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:464f7b2b1ec7f54606057b5dfcd96d6f881ed374479eeb80cda506844dc9c156,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734120704765500400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c8
7cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f82984a47b4cbf142292ae511dd86f6a8088c33775d64f6fe2e991c9a3d330a,PodSandboxId:9510431685e71f06a977c7b58b2c07128ba23c83f637c8858e1c9707af81c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734120701952122817,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39044a484e5d5382da10c9731d52c9c5d2bd2fe061e2130b8e79e6d5d799d562,PodSandboxId:bb321762b99e6ff02af3ea1e4f6422ee913e5fd63c03a5afe16bffa2e901546b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734120701921126
208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1,PodSandboxId:a46faf2f6f4ad78bb31e17b052c0666bffbbd0418282c1a3db3b66007dce22ac,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt
:1734120698069196101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dbc5f11696379908016ed8b82dbd2724eba7101e79f966b6d5d9b5a62f94de8,PodSandboxId:8429de264662082d5ce6c4e7cffc6d184685121cdce05b235a12f724b80b6aa7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734120698068477
897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d2f978875bd2efb49b939dd16305541f435c8a86f253f949d6604d2ce4e17e4,PodSandboxId:a2e3c861702e744da4f42def4450531cfada309b6910116c55c7ad517e234499,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734120698050247086,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f,PodSandboxId:9972e3b359557b4923c4daf3104dac7a9c551376c51ade286172bc843c3f0b5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684398984495,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6vrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b5400d-2c1b-490c-b032-8a0caf974bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09,PodSandboxId:9f4fcc1c6b2e597b9121967c6f6a29bf38ea9eb51a0f02abc46fff426df38a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734120684295610583,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jf2d4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98adeccd-1fab-4b83-bee7-c87cceb68777,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb,PodSandboxId:730d7124c3a318c3698ab538ebe68afe8e93b374d98fb7897b442950620
bfd58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1734120680903556583,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-swjtc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d07f93-7de7-44f5-86a4-2f7477b820ab,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a94e1132b5157ee1f619dcb366b4b8fbd521a870b4b2259ca3955ef0b58723,PodSandboxId:f6572ec698a70893a1d952cbdf1732205f6c445bdee5c61097295a0ba1f06258,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1734120680958899159,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8b2f1c95640f0fff9f118a2eb853997,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322,PodSandboxId:9af0942e65761431783caf94ebce907a50d096f15d8a22ce1567f82c8f961fd2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1734120680884870039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46072b83659953ee4a4aed8c92eecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375,PodSandboxId:66bf7139cde704876d68fc40d486c0c40981fc3220bba57f2113f7e8384d2b21,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1734120680729625922,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe0df0357fdec884ec8cd833260b09d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128ec710f342a563ce34ae78cbfcafee41318e744a709606ff0370081f95d6a9,PodSandboxId:1ea4a603365f66551f012159dc1a853cd73bca920f5621d5bf8160df1ed49d5f,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1734120680771073727,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-980370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b22774d31a66da17396c20e01a1445,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80cd7767-019b-437a-9a2c-9c8d1335cd4b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd96085405361       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   3 seconds ago       Running             kube-proxy                2                   3475c79513645       kube-proxy-swjtc
	272475bcf7f1e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   a46faf2f6f4ad       storage-provisioner
	32986509583b5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   9972e3b359557       coredns-7c65d6cfc9-6vrmh
	464f7b2b1ec7f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   9f4fcc1c6b2e5       coredns-7c65d6cfc9-jf2d4
	3f82984a47b4c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   6 seconds ago       Running             kube-apiserver            2                   9510431685e71       kube-apiserver-kubernetes-upgrade-980370
	39044a484e5d5       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   6 seconds ago       Running             kube-controller-manager   2                   bb321762b99e6       kube-controller-manager-kubernetes-upgrade-980370
	f1839bd7c9612       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Exited              storage-provisioner       2                   a46faf2f6f4ad       storage-provisioner
	2dbc5f1169637       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   10 seconds ago      Running             kube-scheduler            2                   8429de2646620       kube-scheduler-kubernetes-upgrade-980370
	7d2f978875bd2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   10 seconds ago      Running             etcd                      2                   a2e3c861702e7       etcd-kubernetes-upgrade-980370
	203430ed08b32       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   23 seconds ago      Exited              coredns                   1                   9972e3b359557       coredns-7c65d6cfc9-6vrmh
	007ad2c6c41c5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   23 seconds ago      Exited              coredns                   1                   9f4fcc1c6b2e5       coredns-7c65d6cfc9-jf2d4
	e1a94e1132b51       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago      Exited              etcd                      1                   f6572ec698a70       etcd-kubernetes-upgrade-980370
	f6798d1c0f291       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   27 seconds ago      Exited              kube-proxy                1                   730d7124c3a31       kube-proxy-swjtc
	42846fefaa158       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   27 seconds ago      Exited              kube-controller-manager   1                   9af0942e65761       kube-controller-manager-kubernetes-upgrade-980370
	128ec710f342a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   27 seconds ago      Exited              kube-scheduler            1                   1ea4a603365f6       kube-scheduler-kubernetes-upgrade-980370
	795bd9144cd91       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   27 seconds ago      Exited              kube-apiserver            1                   66bf7139cde70       kube-apiserver-kubernetes-upgrade-980370
	
	
	==> coredns [007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [32986509583b5140fd5adccf1ec1bb54f14fe6cce5dc7331db819bb602f40ebd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [464f7b2b1ec7f54606057b5dfcd96d6f881ed374479eeb80cda506844dc9c156] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-980370
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-980370
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 20:11:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-980370
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 20:11:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 20:11:44 +0000   Fri, 13 Dec 2024 20:11:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 20:11:44 +0000   Fri, 13 Dec 2024 20:11:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 20:11:44 +0000   Fri, 13 Dec 2024 20:11:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 20:11:44 +0000   Fri, 13 Dec 2024 20:11:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    kubernetes-upgrade-980370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 152fe836488e4fbcb18d41002275ba93
	  System UUID:                152fe836-488e-4fbc-b18d-41002275ba93
	  Boot ID:                    f4ba7d8c-4239-490e-95a2-15c21d881ccc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6vrmh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 coredns-7c65d6cfc9-jf2d4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 etcd-kubernetes-upgrade-980370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-980370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-980370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-swjtc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-kubernetes-upgrade-980370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 44s)  kubelet          Node kubernetes-upgrade-980370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 44s)  kubelet          Node kubernetes-upgrade-980370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 44s)  kubelet          Node kubernetes-upgrade-980370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           33s                node-controller  Node kubernetes-upgrade-980370 event: Registered Node kubernetes-upgrade-980370 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-980370 event: Registered Node kubernetes-upgrade-980370 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.565360] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070209] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.210719] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.161015] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.279407] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[Dec13 20:11] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +2.078446] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.058724] kauditd_printk_skb: 158 callbacks suppressed
	[ +11.780737] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.117777] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.976846] systemd-fstab-generator[2444]: Ignoring "noauto" option for root device
	[  +0.093391] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.161688] systemd-fstab-generator[2528]: Ignoring "noauto" option for root device
	[  +0.531267] systemd-fstab-generator[2758]: Ignoring "noauto" option for root device
	[  +0.293723] systemd-fstab-generator[2888]: Ignoring "noauto" option for root device
	[  +0.606840] systemd-fstab-generator[2988]: Ignoring "noauto" option for root device
	[  +1.961386] systemd-fstab-generator[3994]: Ignoring "noauto" option for root device
	[ +11.447537] kauditd_printk_skb: 268 callbacks suppressed
	[  +5.374663] systemd-fstab-generator[4372]: Ignoring "noauto" option for root device
	[  +0.099492] kauditd_printk_skb: 7 callbacks suppressed
	[  +4.758463] systemd-fstab-generator[4873]: Ignoring "noauto" option for root device
	[  +1.807846] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [7d2f978875bd2efb49b939dd16305541f435c8a86f253f949d6604d2ce4e17e4] <==
	{"level":"info","ts":"2024-12-13T20:11:38.249329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 switched to configuration voters=(1794359762391600201)"}
	{"level":"info","ts":"2024-12-13T20:11:38.249399Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","added-peer-id":"18e6d8b26c9b0c49","added-peer-peer-urls":["https://192.168.39.131:2380"]}
	{"level":"info","ts":"2024-12-13T20:11:38.249500Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:11:38.249561Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:11:38.252117Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-13T20:11:38.252291Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"18e6d8b26c9b0c49","initial-advertise-peer-urls":["https://192.168.39.131:2380"],"listen-peer-urls":["https://192.168.39.131:2380"],"advertise-client-urls":["https://192.168.39.131:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.131:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-13T20:11:38.252309Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-13T20:11:38.252372Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2024-12-13T20:11:38.252379Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2024-12-13T20:11:40.021485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-13T20:11:40.021533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-13T20:11:40.021562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 received MsgPreVoteResp from 18e6d8b26c9b0c49 at term 2"}
	{"level":"info","ts":"2024-12-13T20:11:40.021576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 became candidate at term 3"}
	{"level":"info","ts":"2024-12-13T20:11:40.021581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 received MsgVoteResp from 18e6d8b26c9b0c49 at term 3"}
	{"level":"info","ts":"2024-12-13T20:11:40.021589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 became leader at term 3"}
	{"level":"info","ts":"2024-12-13T20:11:40.021602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18e6d8b26c9b0c49 elected leader 18e6d8b26c9b0c49 at term 3"}
	{"level":"info","ts":"2024-12-13T20:11:40.025022Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-13T20:11:40.025875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-13T20:11:40.026577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.131:2379"}
	{"level":"info","ts":"2024-12-13T20:11:40.026869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-13T20:11:40.027446Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-13T20:11:40.028146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-13T20:11:40.024975Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"18e6d8b26c9b0c49","local-member-attributes":"{Name:kubernetes-upgrade-980370 ClientURLs:[https://192.168.39.131:2379]}","request-path":"/0/members/18e6d8b26c9b0c49/attributes","cluster-id":"86e8c9f2bcca8a81","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-13T20:11:40.030785Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-13T20:11:40.030819Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [e1a94e1132b5157ee1f619dcb366b4b8fbd521a870b4b2259ca3955ef0b58723] <==
	{"level":"info","ts":"2024-12-13T20:11:21.945149Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-12-13T20:11:22.010203Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","commit-index":371}
	{"level":"info","ts":"2024-12-13T20:11:22.010303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 switched to configuration voters=()"}
	{"level":"info","ts":"2024-12-13T20:11:22.010356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 became follower at term 2"}
	{"level":"info","ts":"2024-12-13T20:11:22.010365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 18e6d8b26c9b0c49 [peers: [], term: 2, commit: 371, applied: 0, lastindex: 371, lastterm: 2]"}
	{"level":"warn","ts":"2024-12-13T20:11:22.020859Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-12-13T20:11:22.056326Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":363}
	{"level":"info","ts":"2024-12-13T20:11:22.065897Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-12-13T20:11:22.073751Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"18e6d8b26c9b0c49","timeout":"7s"}
	{"level":"info","ts":"2024-12-13T20:11:22.074140Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"18e6d8b26c9b0c49"}
	{"level":"info","ts":"2024-12-13T20:11:22.074216Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"18e6d8b26c9b0c49","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-13T20:11:22.074558Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-13T20:11:22.074847Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-13T20:11:22.074912Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-13T20:11:22.076768Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-13T20:11:22.079990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e6d8b26c9b0c49 switched to configuration voters=(1794359762391600201)"}
	{"level":"info","ts":"2024-12-13T20:11:22.080218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","added-peer-id":"18e6d8b26c9b0c49","added-peer-peer-urls":["https://192.168.39.131:2380"]}
	{"level":"info","ts":"2024-12-13T20:11:22.080603Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"86e8c9f2bcca8a81","local-member-id":"18e6d8b26c9b0c49","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:11:22.080654Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-13T20:11:22.084034Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-13T20:11:22.088622Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-13T20:11:22.090667Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2024-12-13T20:11:22.091321Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.131:2380"}
	{"level":"info","ts":"2024-12-13T20:11:22.093150Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"18e6d8b26c9b0c49","initial-advertise-peer-urls":["https://192.168.39.131:2380"],"listen-peer-urls":["https://192.168.39.131:2380"],"advertise-client-urls":["https://192.168.39.131:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.131:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-13T20:11:22.094086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 20:11:48 up 1 min,  0 users,  load average: 0.95, 0.27, 0.09
	Linux kubernetes-upgrade-980370 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f82984a47b4cbf142292ae511dd86f6a8088c33775d64f6fe2e991c9a3d330a] <==
	I1213 20:11:44.083232       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 20:11:44.123444       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1213 20:11:44.123486       1 aggregator.go:171] initial CRD sync complete...
	I1213 20:11:44.123505       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 20:11:44.123511       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 20:11:44.123516       1 cache.go:39] Caches are synced for autoregister controller
	I1213 20:11:44.149802       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1213 20:11:44.153128       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 20:11:44.153281       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 20:11:44.153451       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 20:11:44.153769       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1213 20:11:44.159487       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1213 20:11:44.164114       1 shared_informer.go:320] Caches are synced for configmaps
	I1213 20:11:44.166493       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1213 20:11:44.166595       1 policy_source.go:224] refreshing policies
	E1213 20:11:44.167702       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 20:11:44.186640       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 20:11:44.988355       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 20:11:45.825260       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1213 20:11:45.836637       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1213 20:11:45.874309       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1213 20:11:45.915463       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 20:11:45.921973       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 20:11:47.712240       1 controller.go:615] quota admission added evaluator for: endpoints
	I1213 20:11:47.811466       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375] <==
	I1213 20:11:21.541435       1 options.go:228] external host was not specified, using 192.168.39.131
	I1213 20:11:21.605324       1 server.go:142] Version: v1.31.2
	I1213 20:11:21.605361       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1213 20:11:22.837364       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 20:11:22.837432       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1213 20:11:22.837508       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1213 20:11:22.849323       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	
	
	==> kube-controller-manager [39044a484e5d5382da10c9731d52c9c5d2bd2fe061e2130b8e79e6d5d799d562] <==
	I1213 20:11:47.487569       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 20:11:47.487575       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1213 20:11:47.487580       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1213 20:11:47.487873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-980370"
	I1213 20:11:47.498393       1 shared_informer.go:320] Caches are synced for GC
	I1213 20:11:47.498369       1 shared_informer.go:320] Caches are synced for daemon sets
	I1213 20:11:47.499576       1 shared_informer.go:320] Caches are synced for taint
	I1213 20:11:47.499800       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 20:11:47.500155       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-980370"
	I1213 20:11:47.500345       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 20:11:47.508207       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1213 20:11:47.552351       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1213 20:11:47.552442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-980370"
	I1213 20:11:47.564215       1 shared_informer.go:320] Caches are synced for cronjob
	I1213 20:11:47.598559       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1213 20:11:47.608026       1 shared_informer.go:320] Caches are synced for resource quota
	I1213 20:11:47.609220       1 shared_informer.go:320] Caches are synced for resource quota
	I1213 20:11:47.617961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="219.585789ms"
	I1213 20:11:47.618320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.881µs"
	I1213 20:11:47.647848       1 shared_informer.go:320] Caches are synced for job
	I1213 20:11:48.033085       1 shared_informer.go:320] Caches are synced for garbage collector
	I1213 20:11:48.066681       1 shared_informer.go:320] Caches are synced for garbage collector
	I1213 20:11:48.066756       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 20:11:48.639669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.725883ms"
	I1213 20:11:48.640186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="87.122µs"
	
	
	==> kube-controller-manager [42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322] <==
	
	
	==> kube-proxy [f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb] <==
	
	
	==> kube-proxy [fd96085405361fe29fa21ce1d9f58c66ebed8d84588f78fff6b53045972416c7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1213 20:11:45.107770       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1213 20:11:45.124008       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.131"]
	E1213 20:11:45.124185       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 20:11:45.166214       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 20:11:45.166254       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 20:11:45.166275       1 server_linux.go:169] "Using iptables Proxier"
	I1213 20:11:45.169312       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 20:11:45.169544       1 server.go:483] "Version info" version="v1.31.2"
	I1213 20:11:45.169574       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 20:11:45.171671       1 config.go:199] "Starting service config controller"
	I1213 20:11:45.171698       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 20:11:45.171749       1 config.go:105] "Starting endpoint slice config controller"
	I1213 20:11:45.171754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 20:11:45.172068       1 config.go:328] "Starting node config controller"
	I1213 20:11:45.172106       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 20:11:45.272386       1 shared_informer.go:320] Caches are synced for node config
	I1213 20:11:45.272429       1 shared_informer.go:320] Caches are synced for service config
	I1213 20:11:45.272450       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [128ec710f342a563ce34ae78cbfcafee41318e744a709606ff0370081f95d6a9] <==
	
	
	==> kube-scheduler [2dbc5f11696379908016ed8b82dbd2724eba7101e79f966b6d5d9b5a62f94de8] <==
	W1213 20:11:42.133932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.131:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.131:8443: connect: connection refused
	E1213 20:11:42.134028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.131:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.131:8443: connect: connection refused" logger="UnhandledError"
	W1213 20:11:44.029547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 20:11:44.029682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.029960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 20:11:44.030003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.030199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 20:11:44.030282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.030654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 20:11:44.030775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 20:11:44.032828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 20:11:44.033302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 20:11:44.033231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 20:11:44.033907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1213 20:11:44.033979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 20:11:44.034849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 20:11:44.034951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 20:11:44.031422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 20:11:44.035016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1213 20:11:45.610243       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.669028    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e8b2f1c95640f0fff9f118a2eb853997-etcd-data\") pod \"etcd-kubernetes-upgrade-980370\" (UID: \"e8b2f1c95640f0fff9f118a2eb853997\") " pod="kube-system/etcd-kubernetes-upgrade-980370"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.669192    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46072b83659953ee4a4aed8c92eecdc2-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-980370\" (UID: \"46072b83659953ee4a4aed8c92eecdc2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-980370"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.669301    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46072b83659953ee4a4aed8c92eecdc2-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-980370\" (UID: \"46072b83659953ee4a4aed8c92eecdc2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-980370"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: E1213 20:11:41.696444    4379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-980370?timeout=10s\": dial tcp 192.168.39.131:8443: connect: connection refused" interval="400ms"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.847206    4379 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-980370"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: E1213 20:11:41.848240    4379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.131:8443: connect: connection refused" node="kubernetes-upgrade-980370"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.912275    4379 scope.go:117] "RemoveContainer" containerID="42846fefaa1584af20b23584f587a5a4d11045316821d5517b76aaef346bf322"
	Dec 13 20:11:41 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:41.920137    4379 scope.go:117] "RemoveContainer" containerID="795bd9144cd918a1ade296ccc92b66273974ad9169262af5b1e641bd222a9375"
	Dec 13 20:11:42 kubernetes-upgrade-980370 kubelet[4379]: E1213 20:11:42.098306    4379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-980370?timeout=10s\": dial tcp 192.168.39.131:8443: connect: connection refused" interval="800ms"
	Dec 13 20:11:42 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:42.249916    4379 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-980370"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.198324    4379 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-980370"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.198772    4379 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-980370"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.198882    4379 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.199914    4379 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.440652    4379 apiserver.go:52] "Watching apiserver"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.459406    4379 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.536102    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5d07f93-7de7-44f5-86a4-2f7477b820ab-xtables-lock\") pod \"kube-proxy-swjtc\" (UID: \"f5d07f93-7de7-44f5-86a4-2f7477b820ab\") " pod="kube-system/kube-proxy-swjtc"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.536172    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a-tmp\") pod \"storage-provisioner\" (UID: \"c0e9a8ae-fbe1-412c-ab99-d1b73c33c98a\") " pod="kube-system/storage-provisioner"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.536227    4379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5d07f93-7de7-44f5-86a4-2f7477b820ab-lib-modules\") pod \"kube-proxy-swjtc\" (UID: \"f5d07f93-7de7-44f5-86a4-2f7477b820ab\") " pod="kube-system/kube-proxy-swjtc"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.745351    4379 scope.go:117] "RemoveContainer" containerID="203430ed08b32739a19734e6c6b74b3a176e91e7c002ca6592a51cf481e64b5f"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.745891    4379 scope.go:117] "RemoveContainer" containerID="007ad2c6c41c5d2d072a4a5943b3a8d847576ed0b3dcabb9435836e1df698f09"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.746208    4379 scope.go:117] "RemoveContainer" containerID="f6798d1c0f291ca950ee63c3946647f7d5b0c2e6e483f03cf7dd270f2d4dbccb"
	Dec 13 20:11:44 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:44.746395    4379 scope.go:117] "RemoveContainer" containerID="f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1"
	Dec 13 20:11:47 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:47.262104    4379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 20:11:48 kubernetes-upgrade-980370 kubelet[4379]: I1213 20:11:48.594114    4379 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [272475bcf7f1e4c8e8dbe04ab4d3a3270360eef6fbd54befbbc1d633121dc96b] <==
	I1213 20:11:44.997022       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 20:11:45.015039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 20:11:45.016039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [f1839bd7c9612f1cf59e538cf5f59494c4c7b8390c8cd5e7ec79ef6565d165b1] <==
	I1213 20:11:38.231657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 20:11:38.233420       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-980370 -n kubernetes-upgrade-980370
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-980370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-980370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-980370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-980370: (1.268861429s)
--- FAIL: TestKubernetesUpgrade (375.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.341103488s)

                                                
                                                
-- stdout --
	* [old-k8s-version-613355] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-613355" primary control-plane node in "old-k8s-version-613355" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 20:13:50.645726   69781 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:13:50.646075   69781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:13:50.646093   69781 out.go:358] Setting ErrFile to fd 2...
	I1213 20:13:50.646101   69781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:13:50.646520   69781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:13:50.647404   69781 out.go:352] Setting JSON to false
	I1213 20:13:50.648412   69781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6974,"bootTime":1734113857,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:13:50.648510   69781 start.go:139] virtualization: kvm guest
	I1213 20:13:50.650178   69781 out.go:177] * [old-k8s-version-613355] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:13:50.651331   69781 notify.go:220] Checking for updates...
	I1213 20:13:50.651336   69781 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:13:50.652521   69781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:13:50.653493   69781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:13:50.654528   69781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:13:50.655566   69781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:13:50.656504   69781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:13:50.657840   69781 config.go:182] Loaded profile config "bridge-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:13:50.657938   69781 config.go:182] Loaded profile config "enable-default-cni-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:13:50.658013   69781 config.go:182] Loaded profile config "flannel-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:13:50.658083   69781 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:13:50.693750   69781 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 20:13:50.694957   69781 start.go:297] selected driver: kvm2
	I1213 20:13:50.694972   69781 start.go:901] validating driver "kvm2" against <nil>
	I1213 20:13:50.694982   69781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:13:50.695627   69781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:13:50.695702   69781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:13:50.710430   69781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:13:50.710502   69781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 20:13:50.710742   69781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:13:50.710771   69781 cni.go:84] Creating CNI manager for ""
	I1213 20:13:50.710813   69781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:13:50.710821   69781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 20:13:50.710901   69781 start.go:340] cluster config:
	{Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:13:50.710997   69781 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:13:50.712556   69781 out.go:177] * Starting "old-k8s-version-613355" primary control-plane node in "old-k8s-version-613355" cluster
	I1213 20:13:50.713572   69781 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:13:50.713608   69781 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 20:13:50.713616   69781 cache.go:56] Caching tarball of preloaded images
	I1213 20:13:50.713706   69781 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:13:50.713719   69781 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1213 20:13:50.713802   69781 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/config.json ...
	I1213 20:13:50.713818   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/config.json: {Name:mk8fac007a5a42ed0ca41d6a4b1848eefc6dc864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:13:50.713965   69781 start.go:360] acquireMachinesLock for old-k8s-version-613355: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:14:20.307479   69781 start.go:364] duration metric: took 29.593491992s to acquireMachinesLock for "old-k8s-version-613355"
	I1213 20:14:20.307553   69781 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:14:20.307675   69781 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 20:14:20.309167   69781 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1213 20:14:20.309367   69781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:14:20.309423   69781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:14:20.326388   69781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I1213 20:14:20.326935   69781 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:14:20.327629   69781 main.go:141] libmachine: Using API Version  1
	I1213 20:14:20.327655   69781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:14:20.328008   69781 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:14:20.328219   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:14:20.328373   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:20.328552   69781 start.go:159] libmachine.API.Create for "old-k8s-version-613355" (driver="kvm2")
	I1213 20:14:20.328583   69781 client.go:168] LocalClient.Create starting
	I1213 20:14:20.328616   69781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem
	I1213 20:14:20.328650   69781 main.go:141] libmachine: Decoding PEM data...
	I1213 20:14:20.328668   69781 main.go:141] libmachine: Parsing certificate...
	I1213 20:14:20.328727   69781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem
	I1213 20:14:20.328751   69781 main.go:141] libmachine: Decoding PEM data...
	I1213 20:14:20.328767   69781 main.go:141] libmachine: Parsing certificate...
	I1213 20:14:20.328789   69781 main.go:141] libmachine: Running pre-create checks...
	I1213 20:14:20.328801   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .PreCreateCheck
	I1213 20:14:20.329230   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetConfigRaw
	I1213 20:14:20.329663   69781 main.go:141] libmachine: Creating machine...
	I1213 20:14:20.329680   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .Create
	I1213 20:14:20.329809   69781 main.go:141] libmachine: (old-k8s-version-613355) creating KVM machine...
	I1213 20:14:20.329828   69781 main.go:141] libmachine: (old-k8s-version-613355) creating network...
	I1213 20:14:20.331092   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found existing default KVM network
	I1213 20:14:20.332643   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.332488   70243 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:32:bb} reservation:<nil>}
	I1213 20:14:20.333553   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.333461   70243 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:69:ea} reservation:<nil>}
	I1213 20:14:20.334409   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.334311   70243 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d5:ad:f8} reservation:<nil>}
	I1213 20:14:20.335419   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.335355   70243 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00033f770}
	I1213 20:14:20.335484   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | created network xml: 
	I1213 20:14:20.335508   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | <network>
	I1213 20:14:20.335544   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   <name>mk-old-k8s-version-613355</name>
	I1213 20:14:20.335565   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   <dns enable='no'/>
	I1213 20:14:20.335572   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   
	I1213 20:14:20.335582   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1213 20:14:20.335588   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |     <dhcp>
	I1213 20:14:20.335599   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1213 20:14:20.335612   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |     </dhcp>
	I1213 20:14:20.335619   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   </ip>
	I1213 20:14:20.335641   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG |   
	I1213 20:14:20.335653   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | </network>
	I1213 20:14:20.335662   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | 
	I1213 20:14:20.340606   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | trying to create private KVM network mk-old-k8s-version-613355 192.168.72.0/24...
	I1213 20:14:20.410933   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | private KVM network mk-old-k8s-version-613355 192.168.72.0/24 created
	I1213 20:14:20.410976   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.410912   70243 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:14:20.410994   69781 main.go:141] libmachine: (old-k8s-version-613355) setting up store path in /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355 ...
	I1213 20:14:20.411012   69781 main.go:141] libmachine: (old-k8s-version-613355) building disk image from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 20:14:20.411034   69781 main.go:141] libmachine: (old-k8s-version-613355) Downloading /home/jenkins/minikube-integration/20090-12353/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1213 20:14:20.661548   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.661431   70243 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa...
	I1213 20:14:20.733157   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.733039   70243 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/old-k8s-version-613355.rawdisk...
	I1213 20:14:20.733188   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | Writing magic tar header
	I1213 20:14:20.733205   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | Writing SSH key tar header
	I1213 20:14:20.733218   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:20.733155   70243 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355 ...
	I1213 20:14:20.733301   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355
	I1213 20:14:20.733332   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355 (perms=drwx------)
	I1213 20:14:20.733344   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube/machines
	I1213 20:14:20.733359   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:14:20.733370   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube/machines (perms=drwxr-xr-x)
	I1213 20:14:20.733380   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins/minikube-integration/20090-12353/.minikube (perms=drwxr-xr-x)
	I1213 20:14:20.733386   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins/minikube-integration/20090-12353 (perms=drwxrwxr-x)
	I1213 20:14:20.733410   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 20:14:20.733423   69781 main.go:141] libmachine: (old-k8s-version-613355) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 20:14:20.733433   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20090-12353
	I1213 20:14:20.733448   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1213 20:14:20.733457   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home/jenkins
	I1213 20:14:20.733465   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | checking permissions on dir: /home
	I1213 20:14:20.733470   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | skipping /home - not owner
	I1213 20:14:20.733484   69781 main.go:141] libmachine: (old-k8s-version-613355) creating domain...
	I1213 20:14:20.734528   69781 main.go:141] libmachine: (old-k8s-version-613355) define libvirt domain using xml: 
	I1213 20:14:20.734547   69781 main.go:141] libmachine: (old-k8s-version-613355) <domain type='kvm'>
	I1213 20:14:20.734553   69781 main.go:141] libmachine: (old-k8s-version-613355)   <name>old-k8s-version-613355</name>
	I1213 20:14:20.734558   69781 main.go:141] libmachine: (old-k8s-version-613355)   <memory unit='MiB'>2200</memory>
	I1213 20:14:20.734563   69781 main.go:141] libmachine: (old-k8s-version-613355)   <vcpu>2</vcpu>
	I1213 20:14:20.734580   69781 main.go:141] libmachine: (old-k8s-version-613355)   <features>
	I1213 20:14:20.734586   69781 main.go:141] libmachine: (old-k8s-version-613355)     <acpi/>
	I1213 20:14:20.734593   69781 main.go:141] libmachine: (old-k8s-version-613355)     <apic/>
	I1213 20:14:20.734605   69781 main.go:141] libmachine: (old-k8s-version-613355)     <pae/>
	I1213 20:14:20.734613   69781 main.go:141] libmachine: (old-k8s-version-613355)     
	I1213 20:14:20.734622   69781 main.go:141] libmachine: (old-k8s-version-613355)   </features>
	I1213 20:14:20.734632   69781 main.go:141] libmachine: (old-k8s-version-613355)   <cpu mode='host-passthrough'>
	I1213 20:14:20.734637   69781 main.go:141] libmachine: (old-k8s-version-613355)   
	I1213 20:14:20.734644   69781 main.go:141] libmachine: (old-k8s-version-613355)   </cpu>
	I1213 20:14:20.734649   69781 main.go:141] libmachine: (old-k8s-version-613355)   <os>
	I1213 20:14:20.734657   69781 main.go:141] libmachine: (old-k8s-version-613355)     <type>hvm</type>
	I1213 20:14:20.734663   69781 main.go:141] libmachine: (old-k8s-version-613355)     <boot dev='cdrom'/>
	I1213 20:14:20.734672   69781 main.go:141] libmachine: (old-k8s-version-613355)     <boot dev='hd'/>
	I1213 20:14:20.734679   69781 main.go:141] libmachine: (old-k8s-version-613355)     <bootmenu enable='no'/>
	I1213 20:14:20.734694   69781 main.go:141] libmachine: (old-k8s-version-613355)   </os>
	I1213 20:14:20.734764   69781 main.go:141] libmachine: (old-k8s-version-613355)   <devices>
	I1213 20:14:20.734799   69781 main.go:141] libmachine: (old-k8s-version-613355)     <disk type='file' device='cdrom'>
	I1213 20:14:20.734820   69781 main.go:141] libmachine: (old-k8s-version-613355)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/boot2docker.iso'/>
	I1213 20:14:20.734836   69781 main.go:141] libmachine: (old-k8s-version-613355)       <target dev='hdc' bus='scsi'/>
	I1213 20:14:20.734873   69781 main.go:141] libmachine: (old-k8s-version-613355)       <readonly/>
	I1213 20:14:20.734888   69781 main.go:141] libmachine: (old-k8s-version-613355)     </disk>
	I1213 20:14:20.734895   69781 main.go:141] libmachine: (old-k8s-version-613355)     <disk type='file' device='disk'>
	I1213 20:14:20.734916   69781 main.go:141] libmachine: (old-k8s-version-613355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1213 20:14:20.734929   69781 main.go:141] libmachine: (old-k8s-version-613355)       <source file='/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/old-k8s-version-613355.rawdisk'/>
	I1213 20:14:20.734937   69781 main.go:141] libmachine: (old-k8s-version-613355)       <target dev='hda' bus='virtio'/>
	I1213 20:14:20.734942   69781 main.go:141] libmachine: (old-k8s-version-613355)     </disk>
	I1213 20:14:20.734949   69781 main.go:141] libmachine: (old-k8s-version-613355)     <interface type='network'>
	I1213 20:14:20.734955   69781 main.go:141] libmachine: (old-k8s-version-613355)       <source network='mk-old-k8s-version-613355'/>
	I1213 20:14:20.734962   69781 main.go:141] libmachine: (old-k8s-version-613355)       <model type='virtio'/>
	I1213 20:14:20.734967   69781 main.go:141] libmachine: (old-k8s-version-613355)     </interface>
	I1213 20:14:20.734972   69781 main.go:141] libmachine: (old-k8s-version-613355)     <interface type='network'>
	I1213 20:14:20.734978   69781 main.go:141] libmachine: (old-k8s-version-613355)       <source network='default'/>
	I1213 20:14:20.734986   69781 main.go:141] libmachine: (old-k8s-version-613355)       <model type='virtio'/>
	I1213 20:14:20.735000   69781 main.go:141] libmachine: (old-k8s-version-613355)     </interface>
	I1213 20:14:20.735016   69781 main.go:141] libmachine: (old-k8s-version-613355)     <serial type='pty'>
	I1213 20:14:20.735029   69781 main.go:141] libmachine: (old-k8s-version-613355)       <target port='0'/>
	I1213 20:14:20.735040   69781 main.go:141] libmachine: (old-k8s-version-613355)     </serial>
	I1213 20:14:20.735058   69781 main.go:141] libmachine: (old-k8s-version-613355)     <console type='pty'>
	I1213 20:14:20.735066   69781 main.go:141] libmachine: (old-k8s-version-613355)       <target type='serial' port='0'/>
	I1213 20:14:20.735082   69781 main.go:141] libmachine: (old-k8s-version-613355)     </console>
	I1213 20:14:20.735098   69781 main.go:141] libmachine: (old-k8s-version-613355)     <rng model='virtio'>
	I1213 20:14:20.735111   69781 main.go:141] libmachine: (old-k8s-version-613355)       <backend model='random'>/dev/random</backend>
	I1213 20:14:20.735122   69781 main.go:141] libmachine: (old-k8s-version-613355)     </rng>
	I1213 20:14:20.735134   69781 main.go:141] libmachine: (old-k8s-version-613355)     
	I1213 20:14:20.735144   69781 main.go:141] libmachine: (old-k8s-version-613355)     
	I1213 20:14:20.735152   69781 main.go:141] libmachine: (old-k8s-version-613355)   </devices>
	I1213 20:14:20.735161   69781 main.go:141] libmachine: (old-k8s-version-613355) </domain>
	I1213 20:14:20.735179   69781 main.go:141] libmachine: (old-k8s-version-613355) 
	I1213 20:14:20.741763   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:99:35:cb in network default
	I1213 20:14:20.742363   69781 main.go:141] libmachine: (old-k8s-version-613355) starting domain...
	I1213 20:14:20.742381   69781 main.go:141] libmachine: (old-k8s-version-613355) ensuring networks are active...
	I1213 20:14:20.742401   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:20.743076   69781 main.go:141] libmachine: (old-k8s-version-613355) Ensuring network default is active
	I1213 20:14:20.743395   69781 main.go:141] libmachine: (old-k8s-version-613355) Ensuring network mk-old-k8s-version-613355 is active
	I1213 20:14:20.743871   69781 main.go:141] libmachine: (old-k8s-version-613355) getting domain XML...
	I1213 20:14:20.744631   69781 main.go:141] libmachine: (old-k8s-version-613355) creating domain...
	I1213 20:14:22.163972   69781 main.go:141] libmachine: (old-k8s-version-613355) waiting for IP...
	I1213 20:14:22.164888   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:22.165404   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:22.165442   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:22.165395   70243 retry.go:31] will retry after 301.907509ms: waiting for domain to come up
	I1213 20:14:22.469268   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:22.469995   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:22.470038   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:22.469961   70243 retry.go:31] will retry after 304.728508ms: waiting for domain to come up
	I1213 20:14:22.776624   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:22.777395   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:22.777426   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:22.777297   70243 retry.go:31] will retry after 414.892936ms: waiting for domain to come up
	I1213 20:14:23.194065   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:23.194709   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:23.194748   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:23.194674   70243 retry.go:31] will retry after 554.991892ms: waiting for domain to come up
	I1213 20:14:23.751706   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:23.752395   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:23.752423   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:23.752202   70243 retry.go:31] will retry after 646.215519ms: waiting for domain to come up
	I1213 20:14:24.400116   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:24.400790   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:24.400816   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:24.400754   70243 retry.go:31] will retry after 708.137637ms: waiting for domain to come up
	I1213 20:14:25.110260   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:25.110789   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:25.110821   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:25.110780   70243 retry.go:31] will retry after 770.601947ms: waiting for domain to come up
	I1213 20:14:25.883075   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:25.883606   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:25.883639   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:25.883571   70243 retry.go:31] will retry after 1.423420425s: waiting for domain to come up
	I1213 20:14:27.309078   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:27.309626   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:27.309656   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:27.309594   70243 retry.go:31] will retry after 1.61608164s: waiting for domain to come up
	I1213 20:14:28.927970   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:28.928428   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:28.928452   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:28.928417   70243 retry.go:31] will retry after 1.968447499s: waiting for domain to come up
	I1213 20:14:30.898890   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:30.899450   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:30.899507   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:30.899412   70243 retry.go:31] will retry after 2.582547448s: waiting for domain to come up
	I1213 20:14:33.483811   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:33.484257   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:33.484294   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:33.484215   70243 retry.go:31] will retry after 3.05474133s: waiting for domain to come up
	I1213 20:14:36.540718   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:36.541342   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:36.541375   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:36.541291   70243 retry.go:31] will retry after 3.291856231s: waiting for domain to come up
	I1213 20:14:39.836294   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:39.836878   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:14:39.836908   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:14:39.836825   70243 retry.go:31] will retry after 5.052554521s: waiting for domain to come up
	I1213 20:14:44.891650   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:44.892145   69781 main.go:141] libmachine: (old-k8s-version-613355) found domain IP: 192.168.72.134
	I1213 20:14:44.892176   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has current primary IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:44.892184   69781 main.go:141] libmachine: (old-k8s-version-613355) reserving static IP address...
	I1213 20:14:44.892511   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-613355", mac: "52:54:00:d3:40:ab", ip: "192.168.72.134"} in network mk-old-k8s-version-613355
	I1213 20:14:44.971013   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | Getting to WaitForSSH function...
	I1213 20:14:44.971043   69781 main.go:141] libmachine: (old-k8s-version-613355) reserved static IP address 192.168.72.134 for domain old-k8s-version-613355
	I1213 20:14:44.971057   69781 main.go:141] libmachine: (old-k8s-version-613355) waiting for SSH...
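	(The retry loop above polls with increasing backoff until the guest picks up a DHCP lease on the mk-old-k8s-version-613355 network, then reserves 192.168.72.134 as a static address. A minimal sketch of checking the same lease by hand, assuming virsh is available on the libvirt host:)
	    # show active leases on the profile's private network
	    virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-613355
	    # the entry should carry MAC 52:54:00:d3:40:ab and IP 192.168.72.134/24, matching the log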
	I1213 20:14:44.974286   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:44.974722   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:44.974763   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:44.975259   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | Using SSH client type: external
	I1213 20:14:44.975304   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa (-rw-------)
	I1213 20:14:44.975338   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:14:44.975353   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | About to run SSH command:
	I1213 20:14:44.975368   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | exit 0
	I1213 20:14:45.106613   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | SSH cmd err, output: <nil>: 
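	(The probe above runs `exit 0` over SSH with an external client; a zero exit status is what marks the machine as reachable. A rough, purely illustrative equivalent by hand, reusing the key and a subset of the options logged a few lines earlier:)
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa \
	        docker@192.168.72.134 'exit 0' && echo "ssh is up"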
	I1213 20:14:45.106922   69781 main.go:141] libmachine: (old-k8s-version-613355) KVM machine creation complete
	I1213 20:14:45.107237   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetConfigRaw
	I1213 20:14:45.107809   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:45.108006   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:45.108172   69781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1213 20:14:45.108188   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetState
	I1213 20:14:45.109605   69781 main.go:141] libmachine: Detecting operating system of created instance...
	I1213 20:14:45.109622   69781 main.go:141] libmachine: Waiting for SSH to be available...
	I1213 20:14:45.109629   69781 main.go:141] libmachine: Getting to WaitForSSH function...
	I1213 20:14:45.109639   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.112464   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.112821   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.112852   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.113011   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.113180   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.113350   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.113466   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.113652   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:45.113891   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:45.113907   69781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1213 20:14:45.226414   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:14:45.226445   69781 main.go:141] libmachine: Detecting the provisioner...
	I1213 20:14:45.226457   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.230099   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.230563   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.230594   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.230732   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.230993   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.231159   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.231345   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.231679   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:45.231879   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:45.231904   69781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1213 20:14:45.348589   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1213 20:14:45.348666   69781 main.go:141] libmachine: found compatible host: buildroot
	I1213 20:14:45.348676   69781 main.go:141] libmachine: Provisioning with buildroot...
	I1213 20:14:45.348686   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:14:45.348920   69781 buildroot.go:166] provisioning hostname "old-k8s-version-613355"
	I1213 20:14:45.348950   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:14:45.349181   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.352052   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.352446   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.352486   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.352610   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.352785   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.352957   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.353121   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.353296   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:45.353499   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:45.353512   69781 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-613355 && echo "old-k8s-version-613355" | sudo tee /etc/hostname
	I1213 20:14:45.481783   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-613355
	
	I1213 20:14:45.481827   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.484227   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.484585   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.484616   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.484782   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.484957   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.485104   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.485242   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.485395   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:45.485591   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:45.485610   69781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-613355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-613355/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-613355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:14:45.608757   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:14:45.608807   69781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:14:45.608859   69781 buildroot.go:174] setting up certificates
	I1213 20:14:45.608877   69781 provision.go:84] configureAuth start
	I1213 20:14:45.608897   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:14:45.609201   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:14:45.612276   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.612666   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.612694   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.612848   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.615469   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.615836   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.615869   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.615953   69781 provision.go:143] copyHostCerts
	I1213 20:14:45.616016   69781 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:14:45.616029   69781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:14:45.616097   69781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:14:45.616192   69781 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:14:45.616203   69781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:14:45.616233   69781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:14:45.616296   69781 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:14:45.616308   69781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:14:45.616337   69781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:14:45.616397   69781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-613355 san=[127.0.0.1 192.168.72.134 localhost minikube old-k8s-version-613355]
	I1213 20:14:45.660918   69781 provision.go:177] copyRemoteCerts
	I1213 20:14:45.660965   69781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:14:45.660991   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.664108   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.664447   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.664474   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.664675   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.664864   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.665021   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.665153   69781 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:14:45.748864   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:14:45.773308   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 20:14:45.795801   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 20:14:45.820540   69781 provision.go:87] duration metric: took 211.64834ms to configureAuth
	I1213 20:14:45.820565   69781 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:14:45.820726   69781 config.go:182] Loaded profile config "old-k8s-version-613355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1213 20:14:45.820804   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:45.823680   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.824055   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:45.824087   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:45.824229   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:45.824412   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.824593   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:45.824766   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:45.824957   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:45.825174   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:45.825194   69781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:14:46.082284   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
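	(The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o so the extra --insecure-registry flag for the 10.96.0.0/12 service CIDR takes effect. A small sketch for verifying the result inside the guest, using the same paths as in the log:)
	    cat /etc/sysconfig/crio.minikube
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio   # should print "active" after the restart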
	I1213 20:14:46.082313   69781 main.go:141] libmachine: Checking connection to Docker...
	I1213 20:14:46.082324   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetURL
	I1213 20:14:46.083767   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | using libvirt version 6000000
	I1213 20:14:46.086472   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.086855   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.086876   69781 main.go:141] libmachine: Docker is up and running!
	I1213 20:14:46.086891   69781 main.go:141] libmachine: Reticulating splines...
	I1213 20:14:46.086899   69781 client.go:171] duration metric: took 25.758308181s to LocalClient.Create
	I1213 20:14:46.086901   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.086914   69781 start.go:167] duration metric: took 25.758365286s to libmachine.API.Create "old-k8s-version-613355"
	I1213 20:14:46.086921   69781 start.go:293] postStartSetup for "old-k8s-version-613355" (driver="kvm2")
	I1213 20:14:46.086929   69781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:14:46.086942   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:46.087193   69781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:14:46.087211   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:46.089807   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.090249   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.090280   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.090513   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:46.090694   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:46.090886   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:46.091017   69781 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:14:46.177062   69781 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:14:46.181267   69781 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:14:46.181294   69781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:14:46.181337   69781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:14:46.181412   69781 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:14:46.181512   69781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:14:46.191306   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:14:46.214828   69781 start.go:296] duration metric: took 127.896551ms for postStartSetup
	I1213 20:14:46.214891   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetConfigRaw
	I1213 20:14:46.215418   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:14:46.217785   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.218133   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.218159   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.218391   69781 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/config.json ...
	I1213 20:14:46.218589   69781 start.go:128] duration metric: took 25.910903107s to createHost
	I1213 20:14:46.218610   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:46.220684   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.220984   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.221010   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.221156   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:46.221364   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:46.221544   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:46.221662   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:46.221850   69781 main.go:141] libmachine: Using SSH client type: native
	I1213 20:14:46.222054   69781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:14:46.222069   69781 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:14:46.331799   69781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734120886.318380271
	
	I1213 20:14:46.331820   69781 fix.go:216] guest clock: 1734120886.318380271
	I1213 20:14:46.331827   69781 fix.go:229] Guest: 2024-12-13 20:14:46.318380271 +0000 UTC Remote: 2024-12-13 20:14:46.218601003 +0000 UTC m=+55.608204193 (delta=99.779268ms)
	I1213 20:14:46.331843   69781 fix.go:200] guest clock delta is within tolerance: 99.779268ms
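	(The clock check compares `date +%s.%N` from the guest against the host-side timestamp and accepts the drift when it is within tolerance. With the two values from the log, the delta works out to the reported 99.779268ms:)
	    # guest 1734120886.318380271 minus remote 1734120886.218601003
	    echo '1734120886.318380271 - 1734120886.218601003' | bc
	    # .099779268 seconds, i.e. ~99.779268ms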
	I1213 20:14:46.331848   69781 start.go:83] releasing machines lock for "old-k8s-version-613355", held for 26.024331345s
	I1213 20:14:46.331862   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:46.332058   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:14:46.334711   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.335190   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.335216   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.335369   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:46.335917   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:46.336088   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:14:46.336190   69781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:14:46.336228   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:46.336328   69781 ssh_runner.go:195] Run: cat /version.json
	I1213 20:14:46.336368   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:14:46.339302   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.339466   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.339621   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.339642   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.339860   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:46.339885   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:46.339956   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:46.340042   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:14:46.340196   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:46.340231   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:14:46.340395   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:46.340404   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:14:46.340549   69781 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:14:46.340551   69781 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:14:46.456434   69781 ssh_runner.go:195] Run: systemctl --version
	I1213 20:14:46.462401   69781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:14:46.620892   69781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:14:46.627157   69781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:14:46.627236   69781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:14:46.642201   69781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
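	(Before picking a CNI, pre-existing bridge/podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so they cannot conflict; here 87-podman-bridge.conflist was disabled. An illustrative way to see the effect in the guest:)
	    ls /etc/cni/net.d/
	    # expected to show 87-podman-bridge.conflist.mk_disabled instead of the original file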
	I1213 20:14:46.642223   69781 start.go:495] detecting cgroup driver to use...
	I1213 20:14:46.642270   69781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:14:46.659547   69781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:14:46.674449   69781 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:14:46.674504   69781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:14:46.688176   69781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:14:46.701979   69781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:14:46.851325   69781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:14:47.042355   69781 docker.go:233] disabling docker service ...
	I1213 20:14:47.042409   69781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:14:47.056038   69781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:14:47.068119   69781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:14:47.201545   69781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:14:47.347917   69781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:14:47.363009   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:14:47.384003   69781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1213 20:14:47.384165   69781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:14:47.395659   69781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:14:47.395725   69781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:14:47.405877   69781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:14:47.417154   69781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:14:47.428028   69781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
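	(The sed edits above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and add conmon_cgroup = "pod" in the drop-in config. A quick check that all three landed, using the same file path as in the log:)
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.2"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"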
	I1213 20:14:47.439128   69781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:14:47.448700   69781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:14:47.448754   69781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:14:47.461823   69781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 20:14:47.471794   69781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:14:47.599775   69781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:14:47.718838   69781 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:14:47.718921   69781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:14:47.725519   69781 start.go:563] Will wait 60s for crictl version
	I1213 20:14:47.725578   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:47.730277   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:14:47.776587   69781 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
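	(crictl finds cri-o here because of the /etc/crictl.yaml written by the printf a few steps earlier; as written by that command the file contains a single runtime-endpoint line:)
	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/crio/crio.sock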
	I1213 20:14:47.776662   69781 ssh_runner.go:195] Run: crio --version
	I1213 20:14:47.803431   69781 ssh_runner.go:195] Run: crio --version
	I1213 20:14:47.835428   69781 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1213 20:14:47.836675   69781 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:14:47.840409   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:47.840738   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:14:35 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:14:47.840766   69781 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:14:47.841013   69781 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 20:14:47.845430   69781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:14:47.861656   69781 kubeadm.go:883] updating cluster {Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:14:47.861796   69781 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:14:47.861855   69781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:14:47.899204   69781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:14:47.899269   69781 ssh_runner.go:195] Run: which lz4
	I1213 20:14:47.903211   69781 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:14:47.908706   69781 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:14:47.908739   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1213 20:14:49.541685   69781 crio.go:462] duration metric: took 1.638506017s to copy over tarball
	I1213 20:14:49.541765   69781 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:14:52.394887   69781 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.853083755s)
	I1213 20:14:52.394921   69781 crio.go:469] duration metric: took 2.853202053s to extract the tarball
	I1213 20:14:52.394931   69781 ssh_runner.go:146] rm: /preloaded.tar.lz4
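	(Because no preloaded images were visible to cri-o, the ~473 MB preload tarball for v1.20.0/cri-o is scp'd to /preloaded.tar.lz4, unpacked into /var with xattrs preserved, and then removed. A hedged sketch for peeking into such a tarball on the host side, using the source path from the scp line above and assuming lz4 and tar are installed:)
	    lz4 -dc /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	      | tar -tf - | head
	    # lists the entries that get extracted under /var in the guest (container image storage)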
	I1213 20:14:52.438998   69781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:14:52.487094   69781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:14:52.487119   69781 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 20:14:52.487216   69781 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:14:52.487251   69781 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:52.487257   69781 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1213 20:14:52.487216   69781 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:52.487277   69781 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:52.487318   69781 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.487319   69781 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:52.487343   69781 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1213 20:14:52.488825   69781 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.489231   69781 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:14:52.489238   69781 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:52.489231   69781 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1213 20:14:52.489315   69781 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:52.489318   69781 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1213 20:14:52.489231   69781 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:52.489388   69781 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:52.692976   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.733024   69781 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1213 20:14:52.733074   69781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.733129   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.733945   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:52.737479   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.745039   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1213 20:14:52.745040   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:52.754446   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:52.761966   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:52.774387   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1213 20:14:52.860282   69781 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1213 20:14:52.860342   69781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:52.860346   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.860386   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.915064   69781 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1213 20:14:52.915142   69781 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1213 20:14:52.915195   69781 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:52.915087   69781 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1213 20:14:52.915222   69781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:52.915228   69781 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1213 20:14:52.915256   69781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:52.915302   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.915314   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.915195   69781 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1213 20:14:52.915359   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.915260   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.922292   69781 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1213 20:14:52.922340   69781 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1213 20:14:52.922377   69781 ssh_runner.go:195] Run: which crictl
	I1213 20:14:52.943494   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:52.943557   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:52.943599   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:52.943605   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:14:52.943652   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:14:52.943657   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:52.943711   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
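The lines above and below show minikube's image pre-load check for this profile: each required image ID is read from the CRI-O store with podman, and any tag whose ID does not match the pinned hash is removed with crictl so the cached copy can be transferred instead. A rough hand-run equivalent for a single image, a sketch of the pattern rather than the literal cache_images.go logic, run inside the guest:

	# Read the locally stored ID for pause:3.2; if it differs from the expected hash,
	# drop the stale tag so a fresh copy can be loaded (hedged approximation only)
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2
	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2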
	I1213 20:14:53.100132   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:14:53.100206   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:53.100274   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:53.103903   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1213 20:14:53.104083   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:53.104113   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:14:53.104169   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:53.222538   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:14:53.222561   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:14:53.224147   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:14:53.241331   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:14:53.241360   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:14:53.241428   69781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:14:53.342687   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1213 20:14:53.344816   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1213 20:14:53.344853   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1213 20:14:53.356629   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1213 20:14:53.370335   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1213 20:14:53.370344   69781 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1213 20:14:53.728675   69781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:14:53.878504   69781 cache_images.go:92] duration metric: took 1.39136662s to LoadCachedImages
	W1213 20:14:53.878622   69781 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
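The warning means the expected image tarball was never present in the host-side cache, so the cached-image load is skipped and the images are pulled during kubeadm's preflight instead. A minimal way to confirm that on the Jenkins host, plus one hypothetical remediation (the `minikube image load` step is an assumption about a typical fix, not something this run performed):

	# Path taken verbatim from the warning above; the listing shows whether the tarball exists
	ls -l /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	# Hypothetical remediation: push the image into the profile directly instead of relying on the cache
	minikube -p old-k8s-version-613355 image load registry.k8s.io/kube-controller-manager:v1.20.0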
	I1213 20:14:53.878642   69781 kubeadm.go:934] updating node { 192.168.72.134 8443 v1.20.0 crio true true} ...
	I1213 20:14:53.878755   69781 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-613355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
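The kubelet flags rendered above end up in the systemd drop-in that is copied to the node a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A quick way to verify them on the guest while the profile is up, a sketch assuming SSH access via the minikube CLI:

	# Show the drop-in exactly as the node sees it, and the full effective unit
	minikube -p old-k8s-version-613355 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube -p old-k8s-version-613355 ssh -- sudo systemctl cat kubelet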
	I1213 20:14:53.878826   69781 ssh_runner.go:195] Run: crio config
	I1213 20:14:53.926700   69781 cni.go:84] Creating CNI manager for ""
	I1213 20:14:53.926732   69781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:14:53.926753   69781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 20:14:53.926778   69781 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-613355 NodeName:old-k8s-version-613355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 20:14:53.926968   69781 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-613355"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:14:53.927039   69781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1213 20:14:53.937678   69781 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:14:53.937743   69781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:14:53.947532   69781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1213 20:14:53.966183   69781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:14:53.983161   69781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
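The scp above stages the rendered kubeadm config at /var/tmp/minikube/kubeadm.yaml.new; a later step copies it over /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. To inspect the file actually handed to kubeadm on the node, a sketch using only paths that appear in this log:

	# The live config used by kubeadm init, and the freshly staged copy before it replaces the live file
	minikube -p old-k8s-version-613355 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
	minikube -p old-k8s-version-613355 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new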
	I1213 20:14:53.999902   69781 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1213 20:14:54.003707   69781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:14:54.015514   69781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:14:54.143120   69781 ssh_runner.go:195] Run: sudo systemctl start kubelet
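The bash one-liner a few lines up pins control-plane.minikube.internal to the node IP idempotently: any existing entry is filtered out, the fresh mapping is appended, and the result replaces /etc/hosts. Spelled out as a standalone sketch (same logic; the temp-file name is illustrative):

	# Drop any stale control-plane.minikube.internal line, append the current mapping,
	# and copy the result back over /etc/hosts (root needed for the final cp)
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/hosts.$$ \
	  && sudo cp /tmp/hosts.$$ /etc/hosts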
	I1213 20:14:54.159910   69781 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355 for IP: 192.168.72.134
	I1213 20:14:54.159938   69781 certs.go:194] generating shared ca certs ...
	I1213 20:14:54.159958   69781 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.160198   69781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:14:54.160268   69781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:14:54.160280   69781 certs.go:256] generating profile certs ...
	I1213 20:14:54.160355   69781 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.key
	I1213 20:14:54.160374   69781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.crt with IP's: []
	I1213 20:14:54.270025   69781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.crt ...
	I1213 20:14:54.270056   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.crt: {Name:mk4eae248734cfbc5c03e09f3ea05bf9a32990e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.270243   69781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.key ...
	I1213 20:14:54.270260   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.key: {Name:mkc4f00b27e8570693c6dbfae7f9b195b702bb3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.270364   69781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key.60799339
	I1213 20:14:54.270384   69781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt.60799339 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.134]
	I1213 20:14:54.378654   69781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt.60799339 ...
	I1213 20:14:54.378685   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt.60799339: {Name:mk0d02cc9943e0bee2b8171e219a061673eec0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.378873   69781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key.60799339 ...
	I1213 20:14:54.378894   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key.60799339: {Name:mka8058d0329a8092442d1d1e62a6016006ce9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.379016   69781 certs.go:381] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt.60799339 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt
	I1213 20:14:54.379138   69781 certs.go:385] copying /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key.60799339 -> /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key
	I1213 20:14:54.379232   69781 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key
	I1213 20:14:54.379258   69781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.crt with IP's: []
	I1213 20:14:54.523849   69781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.crt ...
	I1213 20:14:54.523874   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.crt: {Name:mke07b82f8baacf097bfc15d324a16db7f0cf7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.524043   69781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key ...
	I1213 20:14:54.524059   69781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key: {Name:mk507ecb0b83c13d20ca7b63ef5c61102285b74e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:14:54.524268   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:14:54.524305   69781 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:14:54.524315   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:14:54.524338   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:14:54.524376   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:14:54.524403   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:14:54.524443   69781 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:14:54.525017   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:14:54.550315   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:14:54.577057   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:14:54.599670   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:14:54.621563   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 20:14:54.648533   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 20:14:54.673044   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:14:54.697616   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:14:54.721354   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:14:54.742644   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:14:54.765755   69781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:14:54.787831   69781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:14:54.803318   69781 ssh_runner.go:195] Run: openssl version
	I1213 20:14:54.808811   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:14:54.819096   69781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:14:54.823459   69781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:14:54.823512   69781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:14:54.829085   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:14:54.840497   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:14:54.850933   69781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:14:54.855250   69781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:14:54.855308   69781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:14:54.860672   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:14:54.871493   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:14:54.882499   69781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:14:54.886924   69781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:14:54.886972   69781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:14:54.892328   69781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
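The *.0 symlink names created above (b5213941.0, 3ec20f2e.0, 51391683.0) are the OpenSSL subject hashes of the corresponding CA certificates, which is how the system trust store is indexed. Reproducing one by hand, with the values taken from the log lines above:

	# Prints the subject hash used as the symlink name, e.g. b5213941 for minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0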
	I1213 20:14:54.902238   69781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:14:54.905957   69781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 20:14:54.906028   69781 kubeadm.go:392] StartCluster: {Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:14:54.906095   69781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:14:54.906130   69781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:14:54.951157   69781 cri.go:89] found id: ""
	I1213 20:14:54.951240   69781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:14:54.961061   69781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:14:54.970489   69781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:14:54.980129   69781 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:14:54.980151   69781 kubeadm.go:157] found existing configuration files:
	
	I1213 20:14:54.980203   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:14:54.991649   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:14:54.991707   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:14:55.001720   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:14:55.010198   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:14:55.010265   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:14:55.019090   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:14:55.027732   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:14:55.027792   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:14:55.040649   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:14:55.055799   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:14:55.055850   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
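The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. Here every grep exits with status 2 because the files do not exist yet, so the rm calls are effectively no-ops. A condensed equivalent of the per-file check, a sketch rather than the literal kubeadm.go logic:

	# Remove any kubeconfig that does not target the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done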
	I1213 20:14:55.075925   69781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:14:55.209825   69781 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:14:55.209887   69781 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:14:55.393056   69781 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:14:55.393214   69781 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:14:55.393376   69781 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:14:55.586332   69781 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:14:55.709559   69781 out.go:235]   - Generating certificates and keys ...
	I1213 20:14:55.709733   69781 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:14:55.709827   69781 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:14:55.709926   69781 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 20:14:55.722376   69781 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 20:14:55.944959   69781 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 20:14:56.027873   69781 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 20:14:56.351826   69781 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 20:14:56.352031   69781 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1213 20:14:56.506682   69781 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 20:14:56.506921   69781 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1213 20:14:56.745024   69781 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 20:14:57.108544   69781 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 20:14:57.235456   69781 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 20:14:57.235658   69781 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:14:57.312700   69781 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:14:57.583106   69781 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:14:57.751348   69781 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:14:57.902663   69781 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:14:57.920467   69781 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:14:57.921682   69781 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:14:57.921753   69781 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:14:58.048268   69781 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:14:58.050073   69781 out.go:235]   - Booting up control plane ...
	I1213 20:14:58.050193   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:14:58.057571   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:14:58.058356   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:14:58.059141   69781 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:14:58.063193   69781 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:15:38.061707   69781 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:15:38.062057   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:15:38.062329   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:15:43.062925   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:15:43.063172   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:15:53.063566   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:15:53.063811   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:16:13.064891   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:16:13.065118   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:16:53.065688   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:16:53.066257   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:16:53.066300   69781 kubeadm.go:310] 
	I1213 20:16:53.066389   69781 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:16:53.066488   69781 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:16:53.066502   69781 kubeadm.go:310] 
	I1213 20:16:53.066592   69781 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:16:53.066693   69781 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:16:53.066951   69781 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:16:53.066964   69781 kubeadm.go:310] 
	I1213 20:16:53.067205   69781 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:16:53.067292   69781 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:16:53.067398   69781 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:16:53.067425   69781 kubeadm.go:310] 
	I1213 20:16:53.067712   69781 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:16:53.067930   69781 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:16:53.067946   69781 kubeadm.go:310] 
	I1213 20:16:53.068192   69781 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:16:53.068418   69781 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:16:53.068579   69781 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:16:53.068935   69781 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:16:53.068965   69781 kubeadm.go:310] 
	I1213 20:16:53.069154   69781 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:16:53.069411   69781 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:16:53.069804   69781 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
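kubeadm's own hint above is the right starting point for this failure: the kubelet never answered on 10248, so the static control-plane pods were never started. Inside the minikube guest the checks look roughly like this (a hypothetical debugging session using the commands kubeadm suggests, not part of the recorded run):

	minikube -p old-k8s-version-613355 ssh
	# then, on the node:
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause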
	W1213 20:16:53.069896   69781 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-613355] and IPs [192.168.72.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 20:16:53.069946   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:16:53.849083   69781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:16:53.862747   69781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:16:53.872033   69781 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:16:53.872055   69781 kubeadm.go:157] found existing configuration files:
	
	I1213 20:16:53.872104   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:16:53.880753   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:16:53.880811   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:16:53.889511   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:16:53.897751   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:16:53.897813   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:16:53.907026   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:16:53.915439   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:16:53.915494   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:16:53.925025   69781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:16:53.934075   69781 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:16:53.934131   69781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:16:53.943490   69781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:16:54.160157   69781 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:18:50.234630   69781 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:18:50.234774   69781 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:18:50.237133   69781 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:18:50.237220   69781 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:18:50.237324   69781 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:18:50.237448   69781 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:18:50.237564   69781 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:18:50.237646   69781 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:18:50.239582   69781 out.go:235]   - Generating certificates and keys ...
	I1213 20:18:50.239697   69781 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:18:50.239796   69781 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:18:50.239922   69781 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:18:50.240021   69781 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:18:50.240131   69781 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:18:50.240216   69781 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:18:50.240315   69781 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:18:50.240399   69781 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:18:50.240546   69781 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:18:50.240664   69781 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:18:50.240723   69781 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:18:50.240800   69781 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:18:50.240872   69781 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:18:50.240949   69781 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:18:50.241031   69781 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:18:50.241111   69781 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:18:50.241267   69781 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:18:50.241386   69781 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:18:50.241446   69781 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:18:50.241563   69781 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:18:50.242921   69781 out.go:235]   - Booting up control plane ...
	I1213 20:18:50.243047   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:18:50.243155   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:18:50.243259   69781 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:18:50.243387   69781 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:18:50.243616   69781 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:18:50.243703   69781 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:18:50.243803   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:18:50.244065   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:18:50.244177   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:18:50.244376   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:18:50.244473   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:18:50.244680   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:18:50.244789   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:18:50.245004   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:18:50.245085   69781 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:18:50.245342   69781 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:18:50.245366   69781 kubeadm.go:310] 
	I1213 20:18:50.245427   69781 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:18:50.245482   69781 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:18:50.245493   69781 kubeadm.go:310] 
	I1213 20:18:50.245533   69781 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:18:50.245583   69781 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:18:50.245736   69781 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:18:50.245756   69781 kubeadm.go:310] 
	I1213 20:18:50.245874   69781 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:18:50.245913   69781 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:18:50.245946   69781 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:18:50.245956   69781 kubeadm.go:310] 
	I1213 20:18:50.246071   69781 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:18:50.246186   69781 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:18:50.246204   69781 kubeadm.go:310] 
	I1213 20:18:50.246349   69781 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:18:50.246480   69781 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:18:50.246584   69781 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:18:50.246687   69781 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:18:50.246707   69781 kubeadm.go:310] 
	I1213 20:18:50.246759   69781 kubeadm.go:394] duration metric: took 3m55.340749902s to StartCluster
	I1213 20:18:50.246819   69781 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:18:50.246904   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:18:50.304101   69781 cri.go:89] found id: ""
	I1213 20:18:50.304131   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.304142   69781 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:18:50.304149   69781 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:18:50.304202   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:18:50.343240   69781 cri.go:89] found id: ""
	I1213 20:18:50.343295   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.343308   69781 logs.go:284] No container was found matching "etcd"
	I1213 20:18:50.343316   69781 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:18:50.343395   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:18:50.379163   69781 cri.go:89] found id: ""
	I1213 20:18:50.379204   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.379214   69781 logs.go:284] No container was found matching "coredns"
	I1213 20:18:50.379226   69781 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:18:50.379293   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:18:50.414322   69781 cri.go:89] found id: ""
	I1213 20:18:50.414356   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.414367   69781 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:18:50.414374   69781 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:18:50.414421   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:18:50.458116   69781 cri.go:89] found id: ""
	I1213 20:18:50.458148   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.458158   69781 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:18:50.458165   69781 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:18:50.458228   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:18:50.497684   69781 cri.go:89] found id: ""
	I1213 20:18:50.497715   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.497724   69781 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:18:50.497731   69781 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:18:50.497788   69781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:18:50.536133   69781 cri.go:89] found id: ""
	I1213 20:18:50.536162   69781 logs.go:282] 0 containers: []
	W1213 20:18:50.536174   69781 logs.go:284] No container was found matching "kindnet"
	I1213 20:18:50.536186   69781 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:18:50.536201   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:18:50.660268   69781 logs.go:123] Gathering logs for container status ...
	I1213 20:18:50.660307   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:18:50.710938   69781 logs.go:123] Gathering logs for kubelet ...
	I1213 20:18:50.710975   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:18:50.761913   69781 logs.go:123] Gathering logs for dmesg ...
	I1213 20:18:50.761948   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:18:50.779769   69781 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:18:50.779809   69781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:18:50.933511   69781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 20:18:50.933547   69781 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:18:50.933594   69781 out.go:270] * 
	* 
	W1213 20:18:50.933664   69781 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:18:50.933692   69781 out.go:270] * 
	* 
	W1213 20:18:50.934660   69781 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 20:18:50.937346   69781 out.go:201] 
	W1213 20:18:50.938622   69781 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:18:50.938666   69781 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:18:50.938696   69781 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:18:50.940160   69781 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 6 (254.153259ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 20:18:51.238191   77685 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-613355" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-613355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.65s)
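The FirstStart failure above reduces to the kubelet never answering its http://localhost:10248/healthz probe, and the log's own suggestion points at the kubelet cgroup driver. A minimal retry sketch, reusing the profile and the core flags from the failing command and assuming (not verified here) that a cgroup-driver mismatch is in fact the culprit:

	out/minikube-linux-amd64 delete -p old-k8s-version-613355
	out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if kubeadm still stalls at wait-control-plane, inspect the kubelet on the node:
	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "sudo journalctl -xeu kubelet | tail -n 50"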

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-613355 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-613355 create -f testdata/busybox.yaml: exit status 1 (60.979821ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-613355" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-613355 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 6 (252.234142ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 20:18:51.556683   77724 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-613355" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-613355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 6 (256.995946ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 20:18:51.812374   77755 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-613355" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-613355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)
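Both DeployApp errors are downstream of the missing kubeconfig entry: FirstStart aborted before minikube could write the old-k8s-version-613355 context, so every `kubectl --context old-k8s-version-613355 ...` call fails immediately and the status check warns about a stale minikube-vm context. A minimal sketch of the repair path the status output itself suggests, assuming the cluster can eventually be brought up:

	kubectl config get-contexts                        # the profile's context is absent
	out/minikube-linux-amd64 update-context -p old-k8s-version-613355
	kubectl --context old-k8s-version-613355 create -f testdata/busybox.yaml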

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-613355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 20:19:01.059826   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.066212   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.077565   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.098912   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.140323   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.221749   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.383289   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:01.705117   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:02.347106   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:03.629502   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:05.105330   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:06.190968   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:11.312826   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:21.555126   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:25.048967   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.598604   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.604964   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.616339   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.637739   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.679145   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.760959   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:41.922217   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:42.036721   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:42.244275   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:42.886552   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:44.009556   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:44.168365   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:46.067002   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:46.730326   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:19:51.851804   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:02.093537   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:11.359215   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:22.575096   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:22.998096   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.338118   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.344448   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.355794   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.377162   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.418606   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.500092   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.661665   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:23.983401   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:24.625338   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:25.907567   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:28.468838   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:33.590330   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-613355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m49.013104196s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-613355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-613355 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-613355 describe deploy/metrics-server -n kube-system: exit status 1 (44.39821ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-613355" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-613355 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
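(A minimal manual re-check of this failure, assuming the profile still exists under the same name — the kubeconfig entry for it is reported missing above — would be along these lines; this is a sketch, not part of the captured run:
    kubectl config get-contexts
    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-613355
    kubectl --context old-k8s-version-613355 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
If the custom registry override took effect, the last command should print an image starting with fake.domain/.)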
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 6 (224.634708ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 20:20:41.097387   78231 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-613355" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-613355" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.28s)
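(The status output above already names the usual recovery path for a stale kubectl context; a sketch of it, assuming the same profile name:
    out/minikube-linux-amd64 -p old-k8s-version-613355 update-context
    out/minikube-linux-amd64 status -p old-k8s-version-613355
Here the endpoint for "old-k8s-version-613355" is missing from the kubeconfig entirely, so the SecondStart run below recreates the cluster instead.)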

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (508.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1213 20:20:46.970888   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:55.165338   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:03.537296   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:04.314014   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:07.083380   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:07.988752   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:22.868924   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:44.919647   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:45.275432   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.48912108s)

                                                
                                                
-- stdout --
	* [old-k8s-version-613355] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-613355" primary control-plane node in "old-k8s-version-613355" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-613355" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 20:20:46.647515   78367 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:20:46.647664   78367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:20:46.647675   78367 out.go:358] Setting ErrFile to fd 2...
	I1213 20:20:46.647681   78367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:20:46.647864   78367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:20:46.648432   78367 out.go:352] Setting JSON to false
	I1213 20:20:46.649359   78367 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7390,"bootTime":1734113857,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:20:46.649452   78367 start.go:139] virtualization: kvm guest
	I1213 20:20:46.651687   78367 out.go:177] * [old-k8s-version-613355] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:20:46.653023   78367 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:20:46.653051   78367 notify.go:220] Checking for updates...
	I1213 20:20:46.655530   78367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:20:46.656749   78367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:20:46.657866   78367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:20:46.659052   78367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:20:46.660319   78367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:20:46.661917   78367 config.go:182] Loaded profile config "old-k8s-version-613355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1213 20:20:46.662362   78367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:20:46.662419   78367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:20:46.678189   78367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1213 20:20:46.678547   78367 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:20:46.679144   78367 main.go:141] libmachine: Using API Version  1
	I1213 20:20:46.679166   78367 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:20:46.679486   78367 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:20:46.679656   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:20:46.681367   78367 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1213 20:20:46.682497   78367 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:20:46.682787   78367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:20:46.682821   78367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:20:46.697924   78367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I1213 20:20:46.698406   78367 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:20:46.698977   78367 main.go:141] libmachine: Using API Version  1
	I1213 20:20:46.699012   78367 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:20:46.699310   78367 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:20:46.699500   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:20:46.735748   78367 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 20:20:46.736801   78367 start.go:297] selected driver: kvm2
	I1213 20:20:46.736817   78367 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:20:46.736950   78367 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:20:46.737918   78367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:20:46.738022   78367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:20:46.753439   78367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:20:46.754006   78367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:20:46.754050   78367 cni.go:84] Creating CNI manager for ""
	I1213 20:20:46.754102   78367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:20:46.754138   78367 start.go:340] cluster config:
	{Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:20:46.754240   78367 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:20:46.756168   78367 out.go:177] * Starting "old-k8s-version-613355" primary control-plane node in "old-k8s-version-613355" cluster
	I1213 20:20:46.757240   78367 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:20:46.757316   78367 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 20:20:46.757331   78367 cache.go:56] Caching tarball of preloaded images
	I1213 20:20:46.757413   78367 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:20:46.757428   78367 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1213 20:20:46.757566   78367 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/config.json ...
	I1213 20:20:46.757773   78367 start.go:360] acquireMachinesLock for old-k8s-version-613355: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:20:46.757815   78367 start.go:364] duration metric: took 23.613µs to acquireMachinesLock for "old-k8s-version-613355"
	I1213 20:20:46.757832   78367 start.go:96] Skipping create...Using existing machine configuration
	I1213 20:20:46.757840   78367 fix.go:54] fixHost starting: 
	I1213 20:20:46.758080   78367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:20:46.758109   78367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:20:46.773346   78367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I1213 20:20:46.773790   78367 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:20:46.774494   78367 main.go:141] libmachine: Using API Version  1
	I1213 20:20:46.774523   78367 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:20:46.774860   78367 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:20:46.775063   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:20:46.775187   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetState
	I1213 20:20:46.776885   78367 fix.go:112] recreateIfNeeded on old-k8s-version-613355: state=Stopped err=<nil>
	I1213 20:20:46.776914   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	W1213 20:20:46.777064   78367 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 20:20:46.778805   78367 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-613355" ...
	I1213 20:20:46.779853   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .Start
	I1213 20:20:46.780047   78367 main.go:141] libmachine: (old-k8s-version-613355) starting domain...
	I1213 20:20:46.780070   78367 main.go:141] libmachine: (old-k8s-version-613355) ensuring networks are active...
	I1213 20:20:46.780777   78367 main.go:141] libmachine: (old-k8s-version-613355) Ensuring network default is active
	I1213 20:20:46.781077   78367 main.go:141] libmachine: (old-k8s-version-613355) Ensuring network mk-old-k8s-version-613355 is active
	I1213 20:20:46.781633   78367 main.go:141] libmachine: (old-k8s-version-613355) getting domain XML...
	I1213 20:20:46.782411   78367 main.go:141] libmachine: (old-k8s-version-613355) creating domain...
	I1213 20:20:48.045646   78367 main.go:141] libmachine: (old-k8s-version-613355) waiting for IP...
	I1213 20:20:48.046659   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:48.047228   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:48.047327   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:48.047234   78403 retry.go:31] will retry after 254.504632ms: waiting for domain to come up
	I1213 20:20:48.303792   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:48.304331   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:48.304379   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:48.304278   78403 retry.go:31] will retry after 321.731272ms: waiting for domain to come up
	I1213 20:20:48.627848   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:48.628438   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:48.628461   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:48.628399   78403 retry.go:31] will retry after 436.95851ms: waiting for domain to come up
	I1213 20:20:49.067167   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:49.067730   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:49.067751   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:49.067693   78403 retry.go:31] will retry after 501.071212ms: waiting for domain to come up
	I1213 20:20:49.570261   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:49.570732   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:49.570753   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:49.570709   78403 retry.go:31] will retry after 740.206231ms: waiting for domain to come up
	I1213 20:20:50.312955   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:50.313404   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:50.313432   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:50.313374   78403 retry.go:31] will retry after 746.831016ms: waiting for domain to come up
	I1213 20:20:51.062536   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:51.063216   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:51.063273   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:51.063169   78403 retry.go:31] will retry after 937.797094ms: waiting for domain to come up
	I1213 20:20:52.002347   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:52.002817   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:52.002841   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:52.002796   78403 retry.go:31] will retry after 1.243637968s: waiting for domain to come up
	I1213 20:20:53.248089   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:53.248580   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:53.248610   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:53.248543   78403 retry.go:31] will retry after 1.157914572s: waiting for domain to come up
	I1213 20:20:54.407473   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:54.408007   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:54.408039   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:54.407968   78403 retry.go:31] will retry after 1.513958799s: waiting for domain to come up
	I1213 20:20:55.923792   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:55.924353   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:55.924378   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:55.924328   78403 retry.go:31] will retry after 2.014444874s: waiting for domain to come up
	I1213 20:20:57.940288   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:20:57.940857   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:20:57.940912   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:20:57.940832   78403 retry.go:31] will retry after 3.360376331s: waiting for domain to come up
	I1213 20:21:01.303931   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:01.304527   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | unable to find current IP address of domain old-k8s-version-613355 in network mk-old-k8s-version-613355
	I1213 20:21:01.304550   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | I1213 20:21:01.304491   78403 retry.go:31] will retry after 4.341986516s: waiting for domain to come up
	I1213 20:21:05.649183   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.649771   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has current primary IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.649797   78367 main.go:141] libmachine: (old-k8s-version-613355) found domain IP: 192.168.72.134
	I1213 20:21:05.649836   78367 main.go:141] libmachine: (old-k8s-version-613355) reserving static IP address...
	I1213 20:21:05.650236   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "old-k8s-version-613355", mac: "52:54:00:d3:40:ab", ip: "192.168.72.134"} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:05.650266   78367 main.go:141] libmachine: (old-k8s-version-613355) reserved static IP address 192.168.72.134 for domain old-k8s-version-613355
	I1213 20:21:05.650296   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | skip adding static IP to network mk-old-k8s-version-613355 - found existing host DHCP lease matching {name: "old-k8s-version-613355", mac: "52:54:00:d3:40:ab", ip: "192.168.72.134"}
	I1213 20:21:05.650321   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | Getting to WaitForSSH function...
	I1213 20:21:05.650335   78367 main.go:141] libmachine: (old-k8s-version-613355) waiting for SSH...
	I1213 20:21:05.652666   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.653000   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:05.653035   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.653133   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | Using SSH client type: external
	I1213 20:21:05.653173   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa (-rw-------)
	I1213 20:21:05.653209   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:21:05.653223   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | About to run SSH command:
	I1213 20:21:05.653235   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | exit 0
	I1213 20:21:05.783071   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | SSH cmd err, output: <nil>: 
	I1213 20:21:05.783409   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetConfigRaw
	I1213 20:21:05.784020   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:21:05.786624   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.787007   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:05.787039   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.787273   78367 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/config.json ...
	I1213 20:21:05.787458   78367 machine.go:93] provisionDockerMachine start ...
	I1213 20:21:05.787476   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:05.787655   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:05.790042   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.790417   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:05.790439   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.790583   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:05.790757   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:05.790920   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:05.791064   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:05.791218   78367 main.go:141] libmachine: Using SSH client type: native
	I1213 20:21:05.791442   78367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:21:05.791456   78367 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 20:21:05.902939   78367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 20:21:05.902966   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:21:05.903181   78367 buildroot.go:166] provisioning hostname "old-k8s-version-613355"
	I1213 20:21:05.903210   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:21:05.903399   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:05.906237   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.906745   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:05.906786   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:05.906917   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:05.907125   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:05.907310   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:05.907462   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:05.907628   78367 main.go:141] libmachine: Using SSH client type: native
	I1213 20:21:05.907841   78367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:21:05.907859   78367 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-613355 && echo "old-k8s-version-613355" | sudo tee /etc/hostname
	I1213 20:21:06.037663   78367 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-613355
	
	I1213 20:21:06.037697   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.041143   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.041620   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.041650   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.041857   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:06.042097   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.042260   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.042429   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:06.042656   78367 main.go:141] libmachine: Using SSH client type: native
	I1213 20:21:06.042836   78367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:21:06.042871   78367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-613355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-613355/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-613355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:21:06.168234   78367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:21:06.168258   78367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:21:06.168288   78367 buildroot.go:174] setting up certificates
	I1213 20:21:06.168300   78367 provision.go:84] configureAuth start
	I1213 20:21:06.168312   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetMachineName
	I1213 20:21:06.168578   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:21:06.170796   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.171175   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.171219   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.171308   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.173888   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.174246   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.174292   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.174415   78367 provision.go:143] copyHostCerts
	I1213 20:21:06.174483   78367 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:21:06.174500   78367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:21:06.174581   78367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:21:06.174738   78367 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:21:06.174753   78367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:21:06.174798   78367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:21:06.174935   78367 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:21:06.174950   78367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:21:06.174994   78367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:21:06.175104   78367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-613355 san=[127.0.0.1 192.168.72.134 localhost minikube old-k8s-version-613355]
	I1213 20:21:06.394792   78367 provision.go:177] copyRemoteCerts
	I1213 20:21:06.394887   78367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:21:06.394918   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.397873   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.398190   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.398229   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.398395   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:06.398567   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.398708   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:06.398823   78367 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:21:06.485237   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:21:06.509629   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 20:21:06.531994   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 20:21:06.554767   78367 provision.go:87] duration metric: took 386.454673ms to configureAuth
	I1213 20:21:06.554798   78367 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:21:06.555036   78367 config.go:182] Loaded profile config "old-k8s-version-613355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1213 20:21:06.555126   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.558020   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.558418   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.558447   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.558689   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:06.558906   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.559105   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.559263   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:06.559442   78367 main.go:141] libmachine: Using SSH client type: native
	I1213 20:21:06.559635   78367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:21:06.559651   78367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:21:06.793380   78367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:21:06.793405   78367 machine.go:96] duration metric: took 1.005934896s to provisionDockerMachine
	I1213 20:21:06.793420   78367 start.go:293] postStartSetup for "old-k8s-version-613355" (driver="kvm2")
	I1213 20:21:06.793434   78367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:21:06.793452   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:06.793836   78367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:21:06.793870   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.796607   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.797013   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.797038   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.797203   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:06.797399   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.797553   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:06.797715   78367 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:21:06.885188   78367 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:21:06.888959   78367 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:21:06.888988   78367 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:21:06.889044   78367 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:21:06.889132   78367 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:21:06.889222   78367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:21:06.898238   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:21:06.921147   78367 start.go:296] duration metric: took 127.711904ms for postStartSetup
	I1213 20:21:06.921187   78367 fix.go:56] duration metric: took 20.163345853s for fixHost
	I1213 20:21:06.921254   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:06.923949   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.924363   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:06.924392   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:06.924623   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:06.924811   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.924986   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:06.925145   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:06.925339   78367 main.go:141] libmachine: Using SSH client type: native
	I1213 20:21:06.925498   78367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1213 20:21:06.925514   78367 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:21:07.039760   78367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734121267.013688964
	
	I1213 20:21:07.039788   78367 fix.go:216] guest clock: 1734121267.013688964
	I1213 20:21:07.039799   78367 fix.go:229] Guest: 2024-12-13 20:21:07.013688964 +0000 UTC Remote: 2024-12-13 20:21:06.921191537 +0000 UTC m=+20.313496689 (delta=92.497427ms)
	I1213 20:21:07.039867   78367 fix.go:200] guest clock delta is within tolerance: 92.497427ms
	I1213 20:21:07.039883   78367 start.go:83] releasing machines lock for "old-k8s-version-613355", held for 20.282056416s
	I1213 20:21:07.039916   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:07.040171   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:21:07.042684   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.043053   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:07.043081   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.043209   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:07.043660   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:07.043843   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .DriverName
	I1213 20:21:07.043902   78367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:21:07.043949   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:07.044041   78367 ssh_runner.go:195] Run: cat /version.json
	I1213 20:21:07.044067   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHHostname
	I1213 20:21:07.046423   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.046767   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:07.046788   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.046810   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.046952   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:07.047139   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:07.047250   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:07.047313   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:07.047334   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:07.047446   78367 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:21:07.047664   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHPort
	I1213 20:21:07.047830   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHKeyPath
	I1213 20:21:07.047962   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetSSHUsername
	I1213 20:21:07.048087   78367 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/old-k8s-version-613355/id_rsa Username:docker}
	I1213 20:21:07.161161   78367 ssh_runner.go:195] Run: systemctl --version
	I1213 20:21:07.166928   78367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:21:07.312959   78367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:21:07.319980   78367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:21:07.320035   78367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:21:07.335498   78367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 20:21:07.335519   78367 start.go:495] detecting cgroup driver to use...
	I1213 20:21:07.335579   78367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:21:07.350285   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:21:07.363241   78367 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:21:07.363293   78367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:21:07.376279   78367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:21:07.389477   78367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:21:07.506256   78367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:21:07.641168   78367 docker.go:233] disabling docker service ...
	I1213 20:21:07.641243   78367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:21:07.655253   78367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:21:07.667120   78367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:21:07.801757   78367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:21:07.930399   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:21:07.944701   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:21:07.962472   78367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1213 20:21:07.962539   78367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:21:07.971922   78367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:21:07.971994   78367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:21:07.981444   78367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:21:07.990713   78367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:21:08.000016   78367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:21:08.009403   78367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:21:08.017585   78367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:21:08.017636   78367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:21:08.028845   78367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 20:21:08.037423   78367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:21:08.147564   78367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:21:08.231664   78367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:21:08.231748   78367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:21:08.236044   78367 start.go:563] Will wait 60s for crictl version
	I1213 20:21:08.236093   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:08.240171   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:21:08.286091   78367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:21:08.286187   78367 ssh_runner.go:195] Run: crio --version
	I1213 20:21:08.314291   78367 ssh_runner.go:195] Run: crio --version
	I1213 20:21:08.343647   78367 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1213 20:21:08.344945   78367 main.go:141] libmachine: (old-k8s-version-613355) Calling .GetIP
	I1213 20:21:08.347761   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:08.348143   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:40:ab", ip: ""} in network mk-old-k8s-version-613355: {Iface:virbr3 ExpiryTime:2024-12-13 21:20:58 +0000 UTC Type:0 Mac:52:54:00:d3:40:ab Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:old-k8s-version-613355 Clientid:01:52:54:00:d3:40:ab}
	I1213 20:21:08.348181   78367 main.go:141] libmachine: (old-k8s-version-613355) DBG | domain old-k8s-version-613355 has defined IP address 192.168.72.134 and MAC address 52:54:00:d3:40:ab in network mk-old-k8s-version-613355
	I1213 20:21:08.348367   78367 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 20:21:08.352296   78367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:21:08.364585   78367 kubeadm.go:883] updating cluster {Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:21:08.364715   78367 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 20:21:08.364777   78367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:21:08.408264   78367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:21:08.408325   78367 ssh_runner.go:195] Run: which lz4
	I1213 20:21:08.411891   78367 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:21:08.415910   78367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:21:08.415938   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1213 20:21:09.859545   78367 crio.go:462] duration metric: took 1.447679649s to copy over tarball
	I1213 20:21:09.859612   78367 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:21:12.671488   78367 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.81184612s)
	I1213 20:21:12.671520   78367 crio.go:469] duration metric: took 2.811945573s to extract the tarball
	I1213 20:21:12.671529   78367 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 20:21:12.716677   78367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:21:12.749225   78367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1213 20:21:12.749251   78367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 20:21:12.749295   78367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:21:12.749354   78367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:12.749374   78367 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1213 20:21:12.749383   78367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:12.749421   78367 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1213 20:21:12.749361   78367 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:12.749430   78367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:12.749372   78367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:12.751043   78367 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1213 20:21:12.751054   78367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:12.751369   78367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:12.751381   78367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:12.751417   78367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:12.751446   78367 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1213 20:21:12.751463   78367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:12.751486   78367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:21:12.999121   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1213 20:21:13.023586   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1213 20:21:13.027014   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:13.029078   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:13.043290   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:13.047709   78367 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1213 20:21:13.047758   78367 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1213 20:21:13.047802   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.054647   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:13.075480   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:13.106085   78367 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1213 20:21:13.106129   78367 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1213 20:21:13.106169   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.153356   78367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1213 20:21:13.153410   78367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:13.153464   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.158087   78367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1213 20:21:13.158119   78367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:13.158153   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.177977   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:21:13.178021   78367 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1213 20:21:13.178051   78367 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:13.178086   78367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1213 20:21:13.178127   78367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:13.178169   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.178093   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.194413   78367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1213 20:21:13.194440   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:21:13.194459   78367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:13.194473   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:13.194441   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:13.194499   78367 ssh_runner.go:195] Run: which crictl
	I1213 20:21:13.247218   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:13.247218   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:21:13.247276   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:13.298069   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:21:13.298116   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:13.298080   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:13.298139   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:13.388063   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:13.388125   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:13.388169   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1213 20:21:13.452350   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1213 20:21:13.456002   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:13.456103   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1213 20:21:13.456173   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1213 20:21:13.510777   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1213 20:21:13.527053   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1213 20:21:13.527139   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1213 20:21:13.602000   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1213 20:21:13.617035   78367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1213 20:21:13.622632   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1213 20:21:13.622665   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1213 20:21:13.622731   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1213 20:21:13.622809   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1213 20:21:13.648568   78367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1213 20:21:14.035144   78367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:21:14.172768   78367 cache_images.go:92] duration metric: took 1.423498408s to LoadCachedImages
	W1213 20:21:14.172866   78367 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20090-12353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1213 20:21:14.172884   78367 kubeadm.go:934] updating node { 192.168.72.134 8443 v1.20.0 crio true true} ...
	I1213 20:21:14.173032   78367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-613355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 20:21:14.173121   78367 ssh_runner.go:195] Run: crio config
	I1213 20:21:14.221249   78367 cni.go:84] Creating CNI manager for ""
	I1213 20:21:14.221272   78367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:21:14.221286   78367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 20:21:14.221304   78367 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-613355 NodeName:old-k8s-version-613355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 20:21:14.221417   78367 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-613355"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:21:14.221471   78367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1213 20:21:14.231302   78367 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:21:14.231364   78367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:21:14.240398   78367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1213 20:21:14.255970   78367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:21:14.272347   78367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1213 20:21:14.290331   78367 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1213 20:21:14.293948   78367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:21:14.305059   78367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:21:14.417989   78367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:21:14.436814   78367 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355 for IP: 192.168.72.134
	I1213 20:21:14.436841   78367 certs.go:194] generating shared ca certs ...
	I1213 20:21:14.436858   78367 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:21:14.437055   78367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:21:14.437161   78367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:21:14.437183   78367 certs.go:256] generating profile certs ...
	I1213 20:21:14.437329   78367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/client.key
	I1213 20:21:14.437397   78367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key.60799339
	I1213 20:21:14.437438   78367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key
	I1213 20:21:14.437576   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:21:14.437606   78367 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:21:14.437621   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:21:14.437655   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:21:14.437684   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:21:14.437724   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:21:14.437778   78367 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:21:14.438445   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:21:14.493963   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:21:14.522754   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:21:14.553177   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:21:14.584366   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 20:21:14.614602   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 20:21:14.662038   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:21:14.688559   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/old-k8s-version-613355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:21:14.713771   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:21:14.736588   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:21:14.758959   78367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:21:14.781043   78367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:21:14.797969   78367 ssh_runner.go:195] Run: openssl version
	I1213 20:21:14.803546   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:21:14.813965   78367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:21:14.818067   78367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:21:14.818130   78367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:21:14.823656   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:21:14.833683   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:21:14.843534   78367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:21:14.847756   78367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:21:14.847802   78367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:21:14.853296   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:21:14.862966   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:21:14.872694   78367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:21:14.876682   78367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:21:14.876759   78367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:21:14.882445   78367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
	I1213 20:21:14.892920   78367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:21:14.897172   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 20:21:14.902688   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 20:21:14.907869   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 20:21:14.913296   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 20:21:14.918690   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 20:21:14.923889   78367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 20:21:14.929964   78367 kubeadm.go:392] StartCluster: {Name:old-k8s-version-613355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-613355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:21:14.930076   78367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:21:14.930131   78367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:21:14.967299   78367 cri.go:89] found id: ""
	I1213 20:21:14.967372   78367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:21:14.977190   78367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 20:21:14.977209   78367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 20:21:14.977254   78367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 20:21:14.986526   78367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 20:21:14.987579   78367 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-613355" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:21:14.988143   78367 kubeconfig.go:62] /home/jenkins/minikube-integration/20090-12353/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-613355" cluster setting kubeconfig missing "old-k8s-version-613355" context setting]
	I1213 20:21:14.988984   78367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:21:15.012393   78367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 20:21:15.022779   78367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1213 20:21:15.022806   78367 kubeadm.go:1160] stopping kube-system containers ...
	I1213 20:21:15.022817   78367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 20:21:15.022891   78367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:21:15.062166   78367 cri.go:89] found id: ""
	I1213 20:21:15.062244   78367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 20:21:15.080235   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:21:15.089826   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:21:15.089850   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:21:15.089888   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:21:15.098529   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:21:15.098583   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:21:15.107527   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:21:15.116685   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:21:15.116762   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:21:15.126655   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:21:15.136273   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:21:15.136332   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:21:15.146688   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:21:15.156615   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:21:15.156667   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:21:15.166711   78367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:21:15.177129   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:21:15.295359   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:21:15.983153   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:21:16.199086   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:21:16.299152   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:21:16.399724   78367 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:21:16.399791   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:16.900358   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:17.400028   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:17.900835   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:18.399924   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:18.900146   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:19.399948   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:19.900075   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:20.400169   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:20.899939   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:21.399976   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:21.900041   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:22.399910   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:22.900046   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:23.400013   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:23.900524   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:24.400344   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:24.900106   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:25.400739   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:25.900670   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:26.400569   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:26.900149   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:27.400824   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:27.900213   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:28.400542   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:28.899916   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:29.400000   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:29.899919   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:30.400161   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:30.900630   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:31.400347   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:31.900494   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:32.399901   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:32.900310   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:33.400087   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:33.899903   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:34.399990   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:34.900794   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:35.400775   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:35.899852   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:36.400382   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:36.900024   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:37.400010   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:37.900909   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:38.400010   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:38.900631   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:39.399886   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:39.900360   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:40.400611   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:40.900009   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:41.400658   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:41.900306   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:42.400176   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:42.900719   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:43.400507   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:43.900042   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:44.400568   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:44.900293   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:45.400631   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:45.900060   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:46.399867   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:46.900212   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:47.400027   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:47.899844   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:48.399959   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:48.900811   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:49.400559   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:49.900735   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:50.399965   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:50.900875   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:51.400510   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:51.900304   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:52.399858   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:52.900101   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:53.400835   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:53.900840   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:54.400894   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:54.899906   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:55.400137   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:55.900529   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:56.400605   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:56.900191   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:57.400233   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:57.900821   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:58.400092   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:58.900652   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:59.400738   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:21:59.900205   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:00.400024   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:00.900278   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:01.400734   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:01.900219   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:02.400704   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:02.900607   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:03.399889   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:03.900613   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:04.400132   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:04.900747   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:05.400031   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:05.900756   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:06.400129   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:06.900299   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:07.399927   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:07.900193   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:08.400633   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:08.900898   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:09.400303   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:09.900281   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:10.399934   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:10.899918   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:11.400660   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:11.900051   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:12.400814   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:12.900027   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:13.400844   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:13.900696   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:14.400081   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:14.900621   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:15.400737   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:15.900054   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
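	The run of identical ssh_runner lines above is the apiserver wait loop: roughly every 500ms the test re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and keeps polling as long as no matching process exists. A minimal Go sketch of that retry pattern is below; the command and ~0.5s cadence come from the log, while the function name, timeout, and error handling are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// appears or the deadline passes (illustrative sketch only).
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when a matching process exists, non-zero otherwise.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}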
	I1213 20:22:16.400730   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:16.400797   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:16.436901   78367 cri.go:89] found id: ""
	I1213 20:22:16.436935   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.436947   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:16.436956   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:16.437025   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:16.477386   78367 cri.go:89] found id: ""
	I1213 20:22:16.477413   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.477420   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:16.477425   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:16.477472   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:16.508039   78367 cri.go:89] found id: ""
	I1213 20:22:16.508063   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.508079   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:16.508084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:16.508138   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:16.547677   78367 cri.go:89] found id: ""
	I1213 20:22:16.547704   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.547714   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:16.547721   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:16.547779   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:16.585732   78367 cri.go:89] found id: ""
	I1213 20:22:16.585759   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.585768   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:16.585774   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:16.585833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:16.622152   78367 cri.go:89] found id: ""
	I1213 20:22:16.622177   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.622186   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:16.622193   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:16.622263   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:16.659506   78367 cri.go:89] found id: ""
	I1213 20:22:16.659531   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.659538   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:16.659543   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:16.659594   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:16.696574   78367 cri.go:89] found id: ""
	I1213 20:22:16.696602   78367 logs.go:282] 0 containers: []
	W1213 20:22:16.696612   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:16.696622   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:16.696636   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:16.757759   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:16.757788   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:16.771796   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:16.771831   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:16.909436   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:16.909463   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:16.909477   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:16.993440   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:16.993474   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
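	Each cycle like the one above checks every control-plane component with `sudo crictl ps -a --quiet --name=<component>` and, when the output is empty, records "No container was found matching ...", then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A small Go sketch of the per-component listing step follows; the crictl flags mirror the log, while the loop structure and output formatting are assumptions, not minikube's source.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the names checked in the log cycle above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func main() {
		for _, name := range components {
			// --quiet prints only container IDs; empty output means no container exists.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}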
	I1213 20:22:19.536055   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:19.549182   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:19.549237   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:19.582966   78367 cri.go:89] found id: ""
	I1213 20:22:19.582995   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.583002   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:19.583007   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:19.583061   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:19.614910   78367 cri.go:89] found id: ""
	I1213 20:22:19.614940   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.614951   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:19.614959   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:19.615094   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:19.667840   78367 cri.go:89] found id: ""
	I1213 20:22:19.667866   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.667874   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:19.667879   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:19.667937   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:19.700212   78367 cri.go:89] found id: ""
	I1213 20:22:19.700237   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.700244   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:19.700249   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:19.700297   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:19.730726   78367 cri.go:89] found id: ""
	I1213 20:22:19.730756   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.730765   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:19.730771   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:19.730833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:19.761041   78367 cri.go:89] found id: ""
	I1213 20:22:19.761070   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.761079   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:19.761086   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:19.761132   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:19.798268   78367 cri.go:89] found id: ""
	I1213 20:22:19.798293   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.798300   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:19.798305   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:19.798355   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:19.830869   78367 cri.go:89] found id: ""
	I1213 20:22:19.830895   78367 logs.go:282] 0 containers: []
	W1213 20:22:19.830903   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:19.830911   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:19.830921   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:19.880831   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:19.880861   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:19.893520   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:19.893543   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:19.966320   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:19.966347   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:19.966365   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:20.044814   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:20.044846   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:22.583898   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:22.596125   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:22.596189   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:22.629483   78367 cri.go:89] found id: ""
	I1213 20:22:22.629510   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.629517   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:22.629523   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:22.629572   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:22.661264   78367 cri.go:89] found id: ""
	I1213 20:22:22.661287   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.661298   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:22.661306   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:22.661365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:22.694071   78367 cri.go:89] found id: ""
	I1213 20:22:22.694099   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.694109   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:22.694116   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:22.694178   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:22.724909   78367 cri.go:89] found id: ""
	I1213 20:22:22.724940   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.724951   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:22.724959   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:22.725029   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:22.755783   78367 cri.go:89] found id: ""
	I1213 20:22:22.755808   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.755815   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:22.755821   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:22.755884   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:22.787845   78367 cri.go:89] found id: ""
	I1213 20:22:22.787876   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.787887   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:22.787895   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:22.787940   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:22.819767   78367 cri.go:89] found id: ""
	I1213 20:22:22.819794   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.819801   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:22.819807   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:22.819852   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:22.853548   78367 cri.go:89] found id: ""
	I1213 20:22:22.853571   78367 logs.go:282] 0 containers: []
	W1213 20:22:22.853579   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:22.853586   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:22.853600   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:22.888883   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:22.888914   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:22.944832   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:22.944862   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:22.959706   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:22.959730   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:23.034834   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:23.034883   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:23.034903   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:25.613963   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:25.626373   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:25.626435   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:25.659941   78367 cri.go:89] found id: ""
	I1213 20:22:25.659971   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.659981   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:25.659989   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:25.660114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:25.692893   78367 cri.go:89] found id: ""
	I1213 20:22:25.692921   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.692931   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:25.692938   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:25.693003   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:25.724909   78367 cri.go:89] found id: ""
	I1213 20:22:25.724938   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.724946   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:25.724952   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:25.724998   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:25.756929   78367 cri.go:89] found id: ""
	I1213 20:22:25.756951   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.756958   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:25.756964   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:25.757020   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:25.793503   78367 cri.go:89] found id: ""
	I1213 20:22:25.793527   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.793534   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:25.793541   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:25.793589   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:25.825949   78367 cri.go:89] found id: ""
	I1213 20:22:25.825972   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.825979   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:25.825985   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:25.826039   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:25.856536   78367 cri.go:89] found id: ""
	I1213 20:22:25.856570   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.856581   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:25.856588   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:25.856651   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:25.890462   78367 cri.go:89] found id: ""
	I1213 20:22:25.890488   78367 logs.go:282] 0 containers: []
	W1213 20:22:25.890495   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:25.890505   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:25.890515   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:25.903454   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:25.903478   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:25.972659   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:25.972682   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:25.972693   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:26.059581   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:26.059622   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:26.116435   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:26.116470   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:28.672535   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:28.686457   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:28.686535   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:28.723508   78367 cri.go:89] found id: ""
	I1213 20:22:28.723537   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.723544   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:28.723550   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:28.723596   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:28.759251   78367 cri.go:89] found id: ""
	I1213 20:22:28.759282   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.759293   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:28.759301   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:28.759365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:28.792959   78367 cri.go:89] found id: ""
	I1213 20:22:28.792987   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.792997   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:28.793005   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:28.793060   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:28.824215   78367 cri.go:89] found id: ""
	I1213 20:22:28.824241   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.824249   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:28.824255   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:28.824311   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:28.856182   78367 cri.go:89] found id: ""
	I1213 20:22:28.856210   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.856220   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:28.856228   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:28.856298   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:28.888072   78367 cri.go:89] found id: ""
	I1213 20:22:28.888114   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.888126   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:28.888136   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:28.888185   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:28.920128   78367 cri.go:89] found id: ""
	I1213 20:22:28.920161   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.920172   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:28.920180   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:28.920238   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:28.956370   78367 cri.go:89] found id: ""
	I1213 20:22:28.956401   78367 logs.go:282] 0 containers: []
	W1213 20:22:28.956413   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:28.956425   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:28.956457   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:29.034742   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:29.034776   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:29.073635   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:29.073669   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:29.125538   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:29.125565   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:29.137498   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:29.137521   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:29.213199   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
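	The repeated "connection to the server localhost:8443 was refused" from the describe-nodes step is consistent with the empty kube-apiserver container listings: nothing is serving on the apiserver port yet. A minimal reachability probe for that endpoint is sketched below as an illustration; the /healthz path, port, and TLS handling are assumptions about a typical apiserver setup, not something taken from this log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// The apiserver normally serves with a self-signed CA; skip verification for a raw probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// "connection refused" here means nothing is listening on the port.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded with status:", resp.Status)
	}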
	I1213 20:22:31.713724   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:31.726089   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:31.726169   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:31.759053   78367 cri.go:89] found id: ""
	I1213 20:22:31.759091   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.759102   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:31.759111   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:31.759172   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:31.793744   78367 cri.go:89] found id: ""
	I1213 20:22:31.793769   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.793778   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:31.793784   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:31.793845   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:31.829909   78367 cri.go:89] found id: ""
	I1213 20:22:31.829937   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.829945   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:31.829950   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:31.830004   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:31.871059   78367 cri.go:89] found id: ""
	I1213 20:22:31.871091   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.871100   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:31.871108   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:31.871171   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:31.918835   78367 cri.go:89] found id: ""
	I1213 20:22:31.918886   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.918896   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:31.918903   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:31.918963   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:31.951208   78367 cri.go:89] found id: ""
	I1213 20:22:31.951240   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.951248   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:31.951254   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:31.951313   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:31.982142   78367 cri.go:89] found id: ""
	I1213 20:22:31.982176   78367 logs.go:282] 0 containers: []
	W1213 20:22:31.982184   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:31.982190   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:31.982253   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:32.012335   78367 cri.go:89] found id: ""
	I1213 20:22:32.012373   78367 logs.go:282] 0 containers: []
	W1213 20:22:32.012381   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:32.012389   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:32.012410   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:32.061153   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:32.061188   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:32.074011   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:32.074050   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:32.141594   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:32.141616   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:32.141629   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:32.224408   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:32.224442   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:34.759872   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:34.774692   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:34.774752   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:34.811229   78367 cri.go:89] found id: ""
	I1213 20:22:34.811258   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.811268   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:34.811276   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:34.811333   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:34.844716   78367 cri.go:89] found id: ""
	I1213 20:22:34.844738   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.844746   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:34.844751   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:34.844794   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:34.880736   78367 cri.go:89] found id: ""
	I1213 20:22:34.880768   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.880777   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:34.880783   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:34.880836   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:34.916711   78367 cri.go:89] found id: ""
	I1213 20:22:34.916734   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.916741   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:34.916751   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:34.916820   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:34.956775   78367 cri.go:89] found id: ""
	I1213 20:22:34.956806   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.956815   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:34.956820   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:34.956878   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:34.995950   78367 cri.go:89] found id: ""
	I1213 20:22:34.995976   78367 logs.go:282] 0 containers: []
	W1213 20:22:34.995984   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:34.995990   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:34.996057   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:35.030174   78367 cri.go:89] found id: ""
	I1213 20:22:35.030200   78367 logs.go:282] 0 containers: []
	W1213 20:22:35.030209   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:35.030215   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:35.030261   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:35.062643   78367 cri.go:89] found id: ""
	I1213 20:22:35.062669   78367 logs.go:282] 0 containers: []
	W1213 20:22:35.062680   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:35.062692   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:35.062706   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:35.133552   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:35.133576   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:35.133590   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:35.213930   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:35.213965   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:35.250405   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:35.250430   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:35.302764   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:35.302796   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:37.817902   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:37.832136   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:37.832194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:37.879637   78367 cri.go:89] found id: ""
	I1213 20:22:37.879664   78367 logs.go:282] 0 containers: []
	W1213 20:22:37.879674   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:37.879682   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:37.879743   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:37.939938   78367 cri.go:89] found id: ""
	I1213 20:22:37.939970   78367 logs.go:282] 0 containers: []
	W1213 20:22:37.939978   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:37.939984   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:37.940040   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:37.984035   78367 cri.go:89] found id: ""
	I1213 20:22:37.984064   78367 logs.go:282] 0 containers: []
	W1213 20:22:37.984074   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:37.984083   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:37.984134   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:38.028531   78367 cri.go:89] found id: ""
	I1213 20:22:38.028559   78367 logs.go:282] 0 containers: []
	W1213 20:22:38.028569   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:38.028579   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:38.028629   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:38.077582   78367 cri.go:89] found id: ""
	I1213 20:22:38.077601   78367 logs.go:282] 0 containers: []
	W1213 20:22:38.077608   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:38.077613   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:38.077654   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:38.120304   78367 cri.go:89] found id: ""
	I1213 20:22:38.120341   78367 logs.go:282] 0 containers: []
	W1213 20:22:38.120350   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:38.120358   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:38.120417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:38.159152   78367 cri.go:89] found id: ""
	I1213 20:22:38.159182   78367 logs.go:282] 0 containers: []
	W1213 20:22:38.159193   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:38.159201   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:38.159249   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:38.193902   78367 cri.go:89] found id: ""
	I1213 20:22:38.193928   78367 logs.go:282] 0 containers: []
	W1213 20:22:38.193935   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:38.193944   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:38.193955   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:38.266214   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:38.266238   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:38.266252   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:38.356606   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:38.356644   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:38.398060   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:38.398091   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:38.455110   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:38.455138   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:40.969215   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:40.982471   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:40.982544   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:41.017186   78367 cri.go:89] found id: ""
	I1213 20:22:41.017213   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.017222   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:41.017228   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:41.017287   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:41.050459   78367 cri.go:89] found id: ""
	I1213 20:22:41.050490   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.050502   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:41.050510   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:41.050561   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:41.095814   78367 cri.go:89] found id: ""
	I1213 20:22:41.095842   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.095852   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:41.095860   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:41.095914   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:41.128167   78367 cri.go:89] found id: ""
	I1213 20:22:41.128195   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.128205   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:41.128212   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:41.128266   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:41.159569   78367 cri.go:89] found id: ""
	I1213 20:22:41.159596   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.159611   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:41.159618   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:41.159671   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:41.196808   78367 cri.go:89] found id: ""
	I1213 20:22:41.196838   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.196849   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:41.196856   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:41.196914   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:41.234240   78367 cri.go:89] found id: ""
	I1213 20:22:41.234267   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.234278   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:41.234285   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:41.234335   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:41.273869   78367 cri.go:89] found id: ""
	I1213 20:22:41.273897   78367 logs.go:282] 0 containers: []
	W1213 20:22:41.273908   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:41.273918   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:41.273946   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:41.324611   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:41.324638   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:41.337770   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:41.337797   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:41.406372   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:41.406402   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:41.406418   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:41.504370   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:41.504397   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:44.045759   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:44.059896   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:44.059957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:44.097336   78367 cri.go:89] found id: ""
	I1213 20:22:44.097370   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.097381   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:44.097388   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:44.097457   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:44.132189   78367 cri.go:89] found id: ""
	I1213 20:22:44.132222   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.132231   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:44.132237   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:44.132291   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:44.170551   78367 cri.go:89] found id: ""
	I1213 20:22:44.170579   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.170589   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:44.170595   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:44.170656   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:44.207083   78367 cri.go:89] found id: ""
	I1213 20:22:44.207111   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.207119   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:44.207130   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:44.207176   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:44.237746   78367 cri.go:89] found id: ""
	I1213 20:22:44.237780   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.237791   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:44.237799   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:44.237859   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:44.269158   78367 cri.go:89] found id: ""
	I1213 20:22:44.269183   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.269191   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:44.269197   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:44.269259   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:44.303396   78367 cri.go:89] found id: ""
	I1213 20:22:44.303423   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.303431   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:44.303438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:44.303503   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:44.337947   78367 cri.go:89] found id: ""
	I1213 20:22:44.337969   78367 logs.go:282] 0 containers: []
	W1213 20:22:44.337976   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:44.337984   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:44.337997   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:44.388952   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:44.388982   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:44.402111   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:44.402137   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:44.463451   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:44.463476   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:44.463490   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:44.545662   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:44.545698   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:47.082916   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:47.095921   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:47.095988   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:47.132765   78367 cri.go:89] found id: ""
	I1213 20:22:47.132795   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.132805   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:47.132815   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:47.132878   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:47.166143   78367 cri.go:89] found id: ""
	I1213 20:22:47.166174   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.166184   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:47.166192   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:47.166251   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:47.199318   78367 cri.go:89] found id: ""
	I1213 20:22:47.199346   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.199357   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:47.199365   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:47.199412   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:47.232565   78367 cri.go:89] found id: ""
	I1213 20:22:47.232594   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.232602   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:47.232609   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:47.232672   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:47.264065   78367 cri.go:89] found id: ""
	I1213 20:22:47.264097   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.264108   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:47.264115   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:47.264175   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:47.294970   78367 cri.go:89] found id: ""
	I1213 20:22:47.295005   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.295016   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:47.295025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:47.295075   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:47.329805   78367 cri.go:89] found id: ""
	I1213 20:22:47.329835   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.329844   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:47.329851   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:47.329906   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:47.361551   78367 cri.go:89] found id: ""
	I1213 20:22:47.361575   78367 logs.go:282] 0 containers: []
	W1213 20:22:47.361583   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:47.361593   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:47.361605   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:47.413635   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:47.413668   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:47.427394   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:47.427425   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:47.496721   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:47.496750   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:47.496765   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:47.577390   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:47.577424   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:50.114236   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:50.128324   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:50.128395   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:50.165111   78367 cri.go:89] found id: ""
	I1213 20:22:50.165136   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.165146   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:50.165155   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:50.165209   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:50.197945   78367 cri.go:89] found id: ""
	I1213 20:22:50.197975   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.197986   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:50.197994   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:50.198058   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:50.235374   78367 cri.go:89] found id: ""
	I1213 20:22:50.235403   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.235413   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:50.235421   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:50.235489   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:50.271392   78367 cri.go:89] found id: ""
	I1213 20:22:50.271422   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.271432   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:50.271440   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:50.271507   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:50.307806   78367 cri.go:89] found id: ""
	I1213 20:22:50.307835   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.307846   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:50.307853   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:50.307920   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:50.348197   78367 cri.go:89] found id: ""
	I1213 20:22:50.348228   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.348239   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:50.348247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:50.348313   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:50.387033   78367 cri.go:89] found id: ""
	I1213 20:22:50.387061   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.387071   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:50.387078   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:50.387144   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:50.420514   78367 cri.go:89] found id: ""
	I1213 20:22:50.420546   78367 logs.go:282] 0 containers: []
	W1213 20:22:50.420557   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:50.420568   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:50.420584   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:50.464180   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:50.464205   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:50.515499   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:50.515533   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:50.533324   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:50.533356   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:50.606275   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:50.606302   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:50.606318   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:53.191576   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:53.203917   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:53.203991   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:53.233692   78367 cri.go:89] found id: ""
	I1213 20:22:53.233719   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.233729   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:53.233737   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:53.233791   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:53.263402   78367 cri.go:89] found id: ""
	I1213 20:22:53.263433   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.263445   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:53.263453   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:53.263509   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:53.293768   78367 cri.go:89] found id: ""
	I1213 20:22:53.293798   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.293807   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:53.293813   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:53.293860   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:53.325326   78367 cri.go:89] found id: ""
	I1213 20:22:53.325350   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.325357   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:53.325362   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:53.325409   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:53.356120   78367 cri.go:89] found id: ""
	I1213 20:22:53.356151   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.356162   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:53.356170   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:53.356231   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:53.387154   78367 cri.go:89] found id: ""
	I1213 20:22:53.387182   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.387192   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:53.387200   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:53.387276   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:53.419037   78367 cri.go:89] found id: ""
	I1213 20:22:53.419063   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.419074   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:53.419081   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:53.419139   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:53.453840   78367 cri.go:89] found id: ""
	I1213 20:22:53.453869   78367 logs.go:282] 0 containers: []
	W1213 20:22:53.453877   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:53.453886   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:53.453896   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:53.521687   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:53.521708   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:53.521724   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:53.603476   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:53.603507   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:53.640850   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:53.640876   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:53.695978   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:53.696007   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:56.209419   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:56.222185   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:56.222238   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:56.255635   78367 cri.go:89] found id: ""
	I1213 20:22:56.255662   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.255670   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:56.255675   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:56.255733   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:56.292141   78367 cri.go:89] found id: ""
	I1213 20:22:56.292169   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.292180   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:56.292186   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:56.292238   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:56.333407   78367 cri.go:89] found id: ""
	I1213 20:22:56.333430   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.333437   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:56.333443   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:56.333493   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:56.370651   78367 cri.go:89] found id: ""
	I1213 20:22:56.370683   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.370694   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:56.370701   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:56.370766   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:56.402757   78367 cri.go:89] found id: ""
	I1213 20:22:56.402787   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.402795   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:56.402801   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:56.402862   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:56.437816   78367 cri.go:89] found id: ""
	I1213 20:22:56.437847   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.437858   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:56.437866   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:56.437923   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:56.472251   78367 cri.go:89] found id: ""
	I1213 20:22:56.472279   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.472291   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:56.472299   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:56.472357   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:56.504081   78367 cri.go:89] found id: ""
	I1213 20:22:56.504110   78367 logs.go:282] 0 containers: []
	W1213 20:22:56.504118   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:56.504126   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:56.504137   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:56.516330   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:56.516353   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:56.586242   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:56.586263   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:56.586276   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:56.667903   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:56.667933   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:22:56.707517   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:56.707550   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:59.253492   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:22:59.266933   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:22:59.267002   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:22:59.297667   78367 cri.go:89] found id: ""
	I1213 20:22:59.297696   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.297704   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:22:59.297709   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:22:59.297764   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:22:59.330936   78367 cri.go:89] found id: ""
	I1213 20:22:59.330968   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.330979   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:22:59.330987   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:22:59.331047   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:22:59.362618   78367 cri.go:89] found id: ""
	I1213 20:22:59.362649   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.362659   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:22:59.362669   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:22:59.362727   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:22:59.394347   78367 cri.go:89] found id: ""
	I1213 20:22:59.394376   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.394386   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:22:59.394418   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:22:59.394473   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:22:59.426591   78367 cri.go:89] found id: ""
	I1213 20:22:59.426625   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.426636   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:22:59.426644   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:22:59.426704   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:22:59.457769   78367 cri.go:89] found id: ""
	I1213 20:22:59.457797   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.457805   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:22:59.457811   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:22:59.457857   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:22:59.489673   78367 cri.go:89] found id: ""
	I1213 20:22:59.489701   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.489711   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:22:59.489717   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:22:59.489777   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:22:59.525124   78367 cri.go:89] found id: ""
	I1213 20:22:59.525154   78367 logs.go:282] 0 containers: []
	W1213 20:22:59.525163   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:22:59.525173   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:22:59.525187   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:22:59.573335   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:22:59.573366   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:22:59.586744   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:22:59.586772   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:22:59.656913   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:22:59.656936   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:22:59.656952   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:22:59.735975   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:22:59.736016   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:02.273319   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:02.287406   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:02.287472   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:02.320420   78367 cri.go:89] found id: ""
	I1213 20:23:02.320448   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.320455   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:02.320463   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:02.320526   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:02.350888   78367 cri.go:89] found id: ""
	I1213 20:23:02.350912   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.350919   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:02.350925   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:02.350972   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:02.381318   78367 cri.go:89] found id: ""
	I1213 20:23:02.381352   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.381363   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:02.381370   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:02.381431   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:02.418557   78367 cri.go:89] found id: ""
	I1213 20:23:02.418583   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.418591   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:02.418597   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:02.418646   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:02.452997   78367 cri.go:89] found id: ""
	I1213 20:23:02.453022   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.453029   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:02.453035   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:02.453097   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:02.482888   78367 cri.go:89] found id: ""
	I1213 20:23:02.482922   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.482933   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:02.482941   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:02.483000   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:02.515710   78367 cri.go:89] found id: ""
	I1213 20:23:02.515735   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.515742   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:02.515747   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:02.515799   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:02.551662   78367 cri.go:89] found id: ""
	I1213 20:23:02.551689   78367 logs.go:282] 0 containers: []
	W1213 20:23:02.551696   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:02.551707   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:02.551717   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:02.599868   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:02.599898   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:02.613924   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:02.613949   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:02.689881   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:02.689907   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:02.689918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:02.765065   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:02.765099   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:05.302207   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:05.317842   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:05.317908   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:05.350951   78367 cri.go:89] found id: ""
	I1213 20:23:05.350984   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.350995   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:05.351002   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:05.351058   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:05.382286   78367 cri.go:89] found id: ""
	I1213 20:23:05.382317   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.382332   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:05.382339   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:05.382399   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:05.413354   78367 cri.go:89] found id: ""
	I1213 20:23:05.413384   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.413394   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:05.413402   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:05.413455   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:05.446993   78367 cri.go:89] found id: ""
	I1213 20:23:05.447024   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.447035   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:05.447043   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:05.447103   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:05.479802   78367 cri.go:89] found id: ""
	I1213 20:23:05.479836   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.479848   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:05.479856   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:05.479921   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:05.510878   78367 cri.go:89] found id: ""
	I1213 20:23:05.510908   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.510924   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:05.510932   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:05.510994   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:05.544016   78367 cri.go:89] found id: ""
	I1213 20:23:05.544043   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.544054   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:05.544066   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:05.544108   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:05.581369   78367 cri.go:89] found id: ""
	I1213 20:23:05.581391   78367 logs.go:282] 0 containers: []
	W1213 20:23:05.581400   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:05.581410   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:05.581424   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:05.637728   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:05.637751   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:05.651581   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:05.651602   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:05.733951   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:05.733973   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:05.733986   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:05.829328   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:05.829361   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:08.369903   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:08.383976   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:08.384034   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:08.424988   78367 cri.go:89] found id: ""
	I1213 20:23:08.425014   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.425021   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:08.425027   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:08.425071   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:08.467617   78367 cri.go:89] found id: ""
	I1213 20:23:08.467650   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.467660   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:08.467668   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:08.467732   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:08.507668   78367 cri.go:89] found id: ""
	I1213 20:23:08.507694   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.507703   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:08.507709   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:08.507762   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:08.556307   78367 cri.go:89] found id: ""
	I1213 20:23:08.556335   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.556344   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:08.556350   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:08.556400   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:08.593561   78367 cri.go:89] found id: ""
	I1213 20:23:08.593593   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.593605   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:08.593614   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:08.593676   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:08.633734   78367 cri.go:89] found id: ""
	I1213 20:23:08.633765   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.633776   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:08.633784   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:08.633846   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:08.676703   78367 cri.go:89] found id: ""
	I1213 20:23:08.676738   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.676749   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:08.676757   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:08.676815   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:08.718434   78367 cri.go:89] found id: ""
	I1213 20:23:08.718466   78367 logs.go:282] 0 containers: []
	W1213 20:23:08.718478   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:08.718489   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:08.718503   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:08.757890   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:08.757918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:08.827690   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:08.827734   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:08.841874   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:08.841902   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:08.921349   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:08.921391   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:08.921404   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:11.522797   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:11.535791   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:11.535863   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:11.567963   78367 cri.go:89] found id: ""
	I1213 20:23:11.567987   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.567995   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:11.568001   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:11.568064   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:11.600310   78367 cri.go:89] found id: ""
	I1213 20:23:11.600337   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.600348   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:11.600355   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:11.600417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:11.636773   78367 cri.go:89] found id: ""
	I1213 20:23:11.636798   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.636809   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:11.636819   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:11.636874   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:11.672336   78367 cri.go:89] found id: ""
	I1213 20:23:11.672363   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.672373   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:11.672381   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:11.672439   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:11.706606   78367 cri.go:89] found id: ""
	I1213 20:23:11.706636   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.706647   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:11.706655   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:11.706718   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:11.740134   78367 cri.go:89] found id: ""
	I1213 20:23:11.740161   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.740184   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:11.740194   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:11.740262   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:11.779954   78367 cri.go:89] found id: ""
	I1213 20:23:11.779984   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.779994   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:11.779999   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:11.780049   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:11.814393   78367 cri.go:89] found id: ""
	I1213 20:23:11.814421   78367 logs.go:282] 0 containers: []
	W1213 20:23:11.814431   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:11.814440   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:11.814454   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:11.865223   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:11.865254   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:11.878104   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:11.878130   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:11.962741   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:11.962778   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:11.962794   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:12.037718   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:12.037747   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:14.578632   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:14.591877   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:14.591965   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:14.628340   78367 cri.go:89] found id: ""
	I1213 20:23:14.628369   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.628379   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:14.628387   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:14.628447   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:14.674214   78367 cri.go:89] found id: ""
	I1213 20:23:14.674243   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.674251   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:14.674257   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:14.674313   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:14.711398   78367 cri.go:89] found id: ""
	I1213 20:23:14.711426   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.711435   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:14.711442   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:14.711524   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:14.748159   78367 cri.go:89] found id: ""
	I1213 20:23:14.748188   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.748199   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:14.748206   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:14.748265   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:14.780329   78367 cri.go:89] found id: ""
	I1213 20:23:14.780362   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.780373   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:14.780382   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:14.780440   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:14.813500   78367 cri.go:89] found id: ""
	I1213 20:23:14.813530   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.813542   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:14.813549   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:14.813612   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:14.846394   78367 cri.go:89] found id: ""
	I1213 20:23:14.846425   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.846437   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:14.846449   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:14.846514   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:14.879160   78367 cri.go:89] found id: ""
	I1213 20:23:14.879187   78367 logs.go:282] 0 containers: []
	W1213 20:23:14.879197   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:14.879207   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:14.879222   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:14.929145   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:14.929183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:14.944506   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:14.944536   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:15.018902   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:15.018930   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:15.018946   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:15.109138   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:15.109175   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:17.649618   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:17.663349   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:17.663419   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:17.695964   78367 cri.go:89] found id: ""
	I1213 20:23:17.695998   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.696016   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:17.696025   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:17.696088   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:17.729608   78367 cri.go:89] found id: ""
	I1213 20:23:17.729631   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.729638   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:17.729644   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:17.729692   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:17.774031   78367 cri.go:89] found id: ""
	I1213 20:23:17.774057   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.774067   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:17.774074   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:17.774136   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:17.814199   78367 cri.go:89] found id: ""
	I1213 20:23:17.814229   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.814239   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:17.814247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:17.814299   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:17.847931   78367 cri.go:89] found id: ""
	I1213 20:23:17.847964   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.847975   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:17.847983   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:17.848046   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:17.886734   78367 cri.go:89] found id: ""
	I1213 20:23:17.886773   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.886785   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:17.886792   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:17.886878   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:17.940283   78367 cri.go:89] found id: ""
	I1213 20:23:17.940313   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.940325   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:17.940332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:17.940391   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:17.973189   78367 cri.go:89] found id: ""
	I1213 20:23:17.973213   78367 logs.go:282] 0 containers: []
	W1213 20:23:17.973224   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:17.973234   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:17.973248   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:18.023767   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:18.023795   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:18.036093   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:18.036115   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:18.105854   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:18.105878   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:18.105889   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:18.181670   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:18.181702   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:20.721393   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:20.734715   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:20.734789   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:20.779956   78367 cri.go:89] found id: ""
	I1213 20:23:20.780001   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.780013   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:20.780022   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:20.780092   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:20.817248   78367 cri.go:89] found id: ""
	I1213 20:23:20.817277   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.817293   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:20.817302   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:20.817367   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:20.860370   78367 cri.go:89] found id: ""
	I1213 20:23:20.860395   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.860402   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:20.860408   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:20.860460   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:20.893741   78367 cri.go:89] found id: ""
	I1213 20:23:20.893764   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.893771   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:20.893777   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:20.893824   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:20.926668   78367 cri.go:89] found id: ""
	I1213 20:23:20.926697   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.926708   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:20.926716   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:20.926784   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:20.960429   78367 cri.go:89] found id: ""
	I1213 20:23:20.960461   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.960472   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:20.960481   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:20.960550   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:20.995773   78367 cri.go:89] found id: ""
	I1213 20:23:20.995799   78367 logs.go:282] 0 containers: []
	W1213 20:23:20.995806   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:20.995812   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:20.995869   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:21.027404   78367 cri.go:89] found id: ""
	I1213 20:23:21.027431   78367 logs.go:282] 0 containers: []
	W1213 20:23:21.027439   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:21.027448   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:21.027459   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:21.076902   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:21.076932   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:21.089501   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:21.089526   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:21.158272   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:21.158294   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:21.158305   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:21.238015   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:21.238045   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:23.778515   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:23.790864   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:23.790928   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:23.822195   78367 cri.go:89] found id: ""
	I1213 20:23:23.822223   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.822231   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:23.822237   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:23.822301   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:23.854865   78367 cri.go:89] found id: ""
	I1213 20:23:23.854893   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.854902   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:23.854908   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:23.854956   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:23.894554   78367 cri.go:89] found id: ""
	I1213 20:23:23.894579   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.894587   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:23.894593   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:23.894643   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:23.931463   78367 cri.go:89] found id: ""
	I1213 20:23:23.931490   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.931502   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:23.931510   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:23.931557   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:23.964539   78367 cri.go:89] found id: ""
	I1213 20:23:23.964577   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.964585   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:23.964591   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:23.964651   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:23.998036   78367 cri.go:89] found id: ""
	I1213 20:23:23.998066   78367 logs.go:282] 0 containers: []
	W1213 20:23:23.998077   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:23.998084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:23.998145   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:24.040876   78367 cri.go:89] found id: ""
	I1213 20:23:24.040913   78367 logs.go:282] 0 containers: []
	W1213 20:23:24.040924   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:24.040932   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:24.040994   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:24.076551   78367 cri.go:89] found id: ""
	I1213 20:23:24.076579   78367 logs.go:282] 0 containers: []
	W1213 20:23:24.076587   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:24.076595   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:24.076605   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:24.130411   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:24.130440   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:24.144090   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:24.144121   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:24.211080   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:24.211101   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:24.211114   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:24.291795   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:24.291834   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:26.830413   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:26.846691   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:26.846748   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:26.883921   78367 cri.go:89] found id: ""
	I1213 20:23:26.883946   78367 logs.go:282] 0 containers: []
	W1213 20:23:26.883957   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:26.883966   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:26.884021   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:26.917411   78367 cri.go:89] found id: ""
	I1213 20:23:26.917442   78367 logs.go:282] 0 containers: []
	W1213 20:23:26.917452   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:26.917460   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:26.917522   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:26.960451   78367 cri.go:89] found id: ""
	I1213 20:23:26.960478   78367 logs.go:282] 0 containers: []
	W1213 20:23:26.960488   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:26.960495   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:26.960554   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:27.005564   78367 cri.go:89] found id: ""
	I1213 20:23:27.005587   78367 logs.go:282] 0 containers: []
	W1213 20:23:27.005596   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:27.005601   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:27.005658   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:27.042556   78367 cri.go:89] found id: ""
	I1213 20:23:27.042588   78367 logs.go:282] 0 containers: []
	W1213 20:23:27.042600   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:27.042607   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:27.042673   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:27.088431   78367 cri.go:89] found id: ""
	I1213 20:23:27.088463   78367 logs.go:282] 0 containers: []
	W1213 20:23:27.088474   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:27.088481   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:27.088536   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:27.134476   78367 cri.go:89] found id: ""
	I1213 20:23:27.134503   78367 logs.go:282] 0 containers: []
	W1213 20:23:27.134510   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:27.134516   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:27.134567   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:27.167291   78367 cri.go:89] found id: ""
	I1213 20:23:27.167325   78367 logs.go:282] 0 containers: []
	W1213 20:23:27.167336   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:27.167345   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:27.167355   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:27.208146   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:27.208183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:27.268345   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:27.268379   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:27.282890   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:27.282917   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:27.361744   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:27.361768   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:27.361779   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:29.946172   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:29.958416   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:29.958469   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:29.989643   78367 cri.go:89] found id: ""
	I1213 20:23:29.989666   78367 logs.go:282] 0 containers: []
	W1213 20:23:29.989673   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:29.989678   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:29.989722   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:30.020338   78367 cri.go:89] found id: ""
	I1213 20:23:30.020371   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.020381   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:30.020387   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:30.020445   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:30.057189   78367 cri.go:89] found id: ""
	I1213 20:23:30.057218   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.057228   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:30.057234   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:30.057279   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:30.095992   78367 cri.go:89] found id: ""
	I1213 20:23:30.096022   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.096030   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:30.096036   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:30.096081   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:30.147844   78367 cri.go:89] found id: ""
	I1213 20:23:30.147868   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.147875   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:30.147880   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:30.147926   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:30.179824   78367 cri.go:89] found id: ""
	I1213 20:23:30.179853   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.179861   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:30.179866   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:30.179908   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:30.209806   78367 cri.go:89] found id: ""
	I1213 20:23:30.209840   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.209850   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:30.209857   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:30.209898   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:30.241460   78367 cri.go:89] found id: ""
	I1213 20:23:30.241485   78367 logs.go:282] 0 containers: []
	W1213 20:23:30.241492   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:30.241503   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:30.241513   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:30.289088   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:30.289118   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:30.301879   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:30.301902   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:30.364246   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:30.364278   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:30.364302   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:30.442463   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:30.442497   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:32.977785   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:32.990010   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:32.990078   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:33.023334   78367 cri.go:89] found id: ""
	I1213 20:23:33.023365   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.023376   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:33.023384   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:33.023437   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:33.063564   78367 cri.go:89] found id: ""
	I1213 20:23:33.063600   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.063612   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:33.063627   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:33.063690   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:33.098603   78367 cri.go:89] found id: ""
	I1213 20:23:33.098635   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.098646   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:33.098654   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:33.098716   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:33.130401   78367 cri.go:89] found id: ""
	I1213 20:23:33.130425   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.130432   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:33.130438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:33.130482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:33.161936   78367 cri.go:89] found id: ""
	I1213 20:23:33.161962   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.161990   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:33.161998   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:33.162069   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:33.193736   78367 cri.go:89] found id: ""
	I1213 20:23:33.193763   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.193770   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:33.193776   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:33.193821   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:33.225488   78367 cri.go:89] found id: ""
	I1213 20:23:33.225516   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.225524   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:33.225531   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:33.225596   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:33.256606   78367 cri.go:89] found id: ""
	I1213 20:23:33.256636   78367 logs.go:282] 0 containers: []
	W1213 20:23:33.256644   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:33.256653   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:33.256663   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:33.304706   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:33.304734   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:33.316742   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:33.316765   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:33.382925   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:33.382949   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:33.382961   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:33.461020   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:33.461054   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:35.996880   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:36.009352   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:36.009410   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:36.044691   78367 cri.go:89] found id: ""
	I1213 20:23:36.044721   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.044733   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:36.044740   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:36.044800   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:36.076155   78367 cri.go:89] found id: ""
	I1213 20:23:36.076185   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.076195   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:36.076204   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:36.076269   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:36.108435   78367 cri.go:89] found id: ""
	I1213 20:23:36.108467   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.108477   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:36.108485   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:36.108550   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:36.143667   78367 cri.go:89] found id: ""
	I1213 20:23:36.143703   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.143714   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:36.143722   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:36.143781   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:36.183909   78367 cri.go:89] found id: ""
	I1213 20:23:36.183937   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.183963   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:36.183971   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:36.184030   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:36.220069   78367 cri.go:89] found id: ""
	I1213 20:23:36.220094   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.220102   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:36.220108   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:36.220163   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:36.257570   78367 cri.go:89] found id: ""
	I1213 20:23:36.257597   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.257607   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:36.257614   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:36.257671   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:36.294294   78367 cri.go:89] found id: ""
	I1213 20:23:36.294336   78367 logs.go:282] 0 containers: []
	W1213 20:23:36.294345   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:36.294356   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:36.294375   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:36.307425   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:36.307457   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:36.369546   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:36.369568   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:36.369581   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:36.443595   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:36.443630   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:36.478559   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:36.478588   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:39.027955   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:39.041250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:39.041315   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:39.083287   78367 cri.go:89] found id: ""
	I1213 20:23:39.083314   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.083324   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:39.083331   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:39.083384   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:39.125760   78367 cri.go:89] found id: ""
	I1213 20:23:39.125787   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.125798   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:39.125805   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:39.125857   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:39.159459   78367 cri.go:89] found id: ""
	I1213 20:23:39.159487   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.159497   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:39.159504   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:39.159557   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:39.194175   78367 cri.go:89] found id: ""
	I1213 20:23:39.194204   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.194211   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:39.194217   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:39.194265   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:39.228851   78367 cri.go:89] found id: ""
	I1213 20:23:39.228879   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.228889   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:39.228897   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:39.228948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:39.266408   78367 cri.go:89] found id: ""
	I1213 20:23:39.266441   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.266452   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:39.266460   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:39.266505   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:39.303917   78367 cri.go:89] found id: ""
	I1213 20:23:39.303946   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.303957   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:39.303965   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:39.304024   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:39.337643   78367 cri.go:89] found id: ""
	I1213 20:23:39.337670   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.337680   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:39.337690   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:39.337707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:39.394343   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:39.394375   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:39.411615   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:39.411645   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:39.484070   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:39.484095   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:39.484110   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:39.570207   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:39.570231   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:42.109283   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:42.126005   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:42.126094   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:42.169463   78367 cri.go:89] found id: ""
	I1213 20:23:42.169494   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.169505   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:42.169512   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:42.169573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:42.214207   78367 cri.go:89] found id: ""
	I1213 20:23:42.214237   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.214248   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:42.214265   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:42.214327   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:42.255998   78367 cri.go:89] found id: ""
	I1213 20:23:42.256030   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.256041   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:42.256049   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:42.256104   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:42.295578   78367 cri.go:89] found id: ""
	I1213 20:23:42.295607   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.295618   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:42.295625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:42.295686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:42.336462   78367 cri.go:89] found id: ""
	I1213 20:23:42.336489   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.336501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:42.336509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:42.336568   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:42.377959   78367 cri.go:89] found id: ""
	I1213 20:23:42.377987   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.377998   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:42.378020   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:42.378083   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:42.421761   78367 cri.go:89] found id: ""
	I1213 20:23:42.421790   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.421799   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:42.421807   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:42.421866   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:42.456346   78367 cri.go:89] found id: ""
	I1213 20:23:42.456373   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.456387   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:42.456397   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:42.456411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:42.472200   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:42.472241   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:42.544913   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:42.544938   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:42.544954   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:42.646820   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:42.646869   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:42.685374   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:42.685411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.244342   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:45.257131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:45.257210   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:45.291023   78367 cri.go:89] found id: ""
	I1213 20:23:45.291064   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.291072   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:45.291085   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:45.291145   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:45.322469   78367 cri.go:89] found id: ""
	I1213 20:23:45.322499   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.322509   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:45.322516   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:45.322574   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:45.364647   78367 cri.go:89] found id: ""
	I1213 20:23:45.364679   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.364690   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:45.364696   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:45.364754   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:45.406124   78367 cri.go:89] found id: ""
	I1213 20:23:45.406151   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.406161   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:45.406169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:45.406229   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:45.449418   78367 cri.go:89] found id: ""
	I1213 20:23:45.449442   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.449450   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:45.449456   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:45.449513   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:45.491190   78367 cri.go:89] found id: ""
	I1213 20:23:45.491221   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.491231   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:45.491239   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:45.491312   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:45.537336   78367 cri.go:89] found id: ""
	I1213 20:23:45.537365   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.537375   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:45.537383   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:45.537442   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:45.574826   78367 cri.go:89] found id: ""
	I1213 20:23:45.574873   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.574884   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:45.574897   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:45.574911   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.656859   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:45.656900   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:45.671183   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:45.671211   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:45.748645   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:45.748670   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:45.748684   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:45.861549   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:45.861598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:48.414982   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:48.431396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:48.431482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:48.476067   78367 cri.go:89] found id: ""
	I1213 20:23:48.476112   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.476124   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:48.476131   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:48.476194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:48.517216   78367 cri.go:89] found id: ""
	I1213 20:23:48.517258   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.517269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:48.517277   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:48.517381   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:48.562993   78367 cri.go:89] found id: ""
	I1213 20:23:48.563092   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.563117   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:48.563135   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:48.563223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:48.604109   78367 cri.go:89] found id: ""
	I1213 20:23:48.604202   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.604224   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:48.604250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:48.604348   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:48.651185   78367 cri.go:89] found id: ""
	I1213 20:23:48.651219   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.651230   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:48.651238   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:48.651317   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:48.695266   78367 cri.go:89] found id: ""
	I1213 20:23:48.695305   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.695317   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:48.695325   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:48.695389   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:48.741459   78367 cri.go:89] found id: ""
	I1213 20:23:48.741495   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.741506   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:48.741513   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:48.741573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:48.785599   78367 cri.go:89] found id: ""
	I1213 20:23:48.785684   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.785701   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:48.785716   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:48.785744   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:48.845741   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:48.845777   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:48.862971   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:48.863013   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:48.934300   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:48.934328   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:48.934344   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:49.023110   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:49.023154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:51.562149   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:51.580078   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:51.580154   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:51.624644   78367 cri.go:89] found id: ""
	I1213 20:23:51.624677   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.624688   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:51.624696   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:51.624756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:51.673392   78367 cri.go:89] found id: ""
	I1213 20:23:51.673421   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.673432   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:51.673440   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:51.673501   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:51.721445   78367 cri.go:89] found id: ""
	I1213 20:23:51.721472   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.721480   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:51.721488   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:51.721544   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:51.755079   78367 cri.go:89] found id: ""
	I1213 20:23:51.755112   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.755123   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:51.755131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:51.755194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:51.796420   78367 cri.go:89] found id: ""
	I1213 20:23:51.796457   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.796470   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:51.796478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:51.796542   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:51.830054   78367 cri.go:89] found id: ""
	I1213 20:23:51.830080   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.830090   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:51.830098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:51.830153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:51.867546   78367 cri.go:89] found id: ""
	I1213 20:23:51.867574   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.867584   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:51.867592   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:51.867653   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:51.911804   78367 cri.go:89] found id: ""
	I1213 20:23:51.911830   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.911841   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:51.911853   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:51.911867   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:51.981311   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:51.981340   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:51.997948   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:51.997995   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:52.078493   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:52.078526   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:52.078541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:52.181165   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:52.181213   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:54.728341   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:54.742062   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:54.742122   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:54.779920   78367 cri.go:89] found id: ""
	I1213 20:23:54.779947   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.779958   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:54.779966   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:54.780021   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:54.813600   78367 cri.go:89] found id: ""
	I1213 20:23:54.813631   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.813641   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:54.813649   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:54.813711   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:54.846731   78367 cri.go:89] found id: ""
	I1213 20:23:54.846761   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.846771   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:54.846778   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:54.846837   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:54.878598   78367 cri.go:89] found id: ""
	I1213 20:23:54.878628   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.878638   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:54.878646   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:54.878706   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:54.914259   78367 cri.go:89] found id: ""
	I1213 20:23:54.914293   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.914304   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:54.914318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:54.914383   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:54.947232   78367 cri.go:89] found id: ""
	I1213 20:23:54.947264   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.947275   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:54.947283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:54.947350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:54.992079   78367 cri.go:89] found id: ""
	I1213 20:23:54.992108   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.992118   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:54.992125   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:54.992184   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:55.035067   78367 cri.go:89] found id: ""
	I1213 20:23:55.035093   78367 logs.go:282] 0 containers: []
	W1213 20:23:55.035100   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:55.035109   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:55.035122   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:55.108198   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:55.108224   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:55.108238   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:55.197303   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:55.197333   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:55.248131   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:55.248154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:55.301605   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:55.301635   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:57.815345   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:57.830459   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:57.830536   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:57.867421   78367 cri.go:89] found id: ""
	I1213 20:23:57.867450   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.867462   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:57.867470   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:57.867528   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:57.904972   78367 cri.go:89] found id: ""
	I1213 20:23:57.905010   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.905021   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:57.905029   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:57.905092   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:57.951889   78367 cri.go:89] found id: ""
	I1213 20:23:57.951916   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.951928   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:57.951936   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:57.952010   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:57.998664   78367 cri.go:89] found id: ""
	I1213 20:23:57.998697   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.998708   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:57.998715   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:57.998772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:58.047566   78367 cri.go:89] found id: ""
	I1213 20:23:58.047597   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.047608   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:58.047625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:58.047686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:58.082590   78367 cri.go:89] found id: ""
	I1213 20:23:58.082619   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.082629   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:58.082637   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:58.082694   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:58.125035   78367 cri.go:89] found id: ""
	I1213 20:23:58.125071   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.125080   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:58.125087   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:58.125147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:58.168019   78367 cri.go:89] found id: ""
	I1213 20:23:58.168049   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.168060   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:58.168078   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:58.168092   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:58.268179   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:58.268212   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:58.303166   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:58.303192   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:58.393172   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:58.393206   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:58.393220   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:58.489198   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:58.489230   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:01.033661   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:01.047673   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:01.047747   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:01.089498   78367 cri.go:89] found id: ""
	I1213 20:24:01.089526   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.089536   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:01.089543   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:01.089605   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:01.130215   78367 cri.go:89] found id: ""
	I1213 20:24:01.130245   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.130256   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:01.130264   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:01.130326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:01.177064   78367 cri.go:89] found id: ""
	I1213 20:24:01.177102   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.177119   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:01.177126   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:01.177187   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:01.231277   78367 cri.go:89] found id: ""
	I1213 20:24:01.231312   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.231324   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:01.231332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:01.231395   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:01.277419   78367 cri.go:89] found id: ""
	I1213 20:24:01.277446   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.277456   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:01.277463   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:01.277519   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:01.322970   78367 cri.go:89] found id: ""
	I1213 20:24:01.322996   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.323007   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:01.323017   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:01.323087   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:01.369554   78367 cri.go:89] found id: ""
	I1213 20:24:01.369585   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.369596   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:01.369603   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:01.369661   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:01.411927   78367 cri.go:89] found id: ""
	I1213 20:24:01.411957   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.411967   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:01.411987   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:01.412005   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:01.486061   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:01.486097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:01.500644   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:01.500673   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:01.578266   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:01.578283   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:01.578293   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:01.687325   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:01.687362   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.239043   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:04.252218   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:04.252292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:04.294778   78367 cri.go:89] found id: ""
	I1213 20:24:04.294810   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.294820   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:04.294828   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:04.294910   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:04.339012   78367 cri.go:89] found id: ""
	I1213 20:24:04.339049   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.339061   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:04.339069   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:04.339134   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:04.391028   78367 cri.go:89] found id: ""
	I1213 20:24:04.391064   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.391076   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:04.391084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:04.391147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:04.436260   78367 cri.go:89] found id: ""
	I1213 20:24:04.436291   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.436308   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:04.436316   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:04.436372   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:04.485225   78367 cri.go:89] found id: ""
	I1213 20:24:04.485255   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.485274   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:04.485283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:04.485347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:04.527198   78367 cri.go:89] found id: ""
	I1213 20:24:04.527228   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.527239   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:04.527247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:04.527306   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:04.567885   78367 cri.go:89] found id: ""
	I1213 20:24:04.567915   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.567926   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:04.567934   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:04.567984   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:04.608495   78367 cri.go:89] found id: ""
	I1213 20:24:04.608535   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.608546   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:04.608557   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:04.608571   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:04.691701   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:04.691735   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.739203   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:04.739236   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:04.815994   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:04.816050   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:04.851237   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:04.851277   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:04.994736   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:07.495945   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.509565   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:07.509640   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:07.548332   78367 cri.go:89] found id: ""
	I1213 20:24:07.548357   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.548365   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:07.548371   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:07.548417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:07.585718   78367 cri.go:89] found id: ""
	I1213 20:24:07.585745   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.585752   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:07.585758   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:07.585816   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:07.620441   78367 cri.go:89] found id: ""
	I1213 20:24:07.620470   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.620478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:07.620485   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:07.620543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:07.654638   78367 cri.go:89] found id: ""
	I1213 20:24:07.654671   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.654682   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:07.654690   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:07.654752   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:07.690251   78367 cri.go:89] found id: ""
	I1213 20:24:07.690279   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.690289   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:07.690296   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:07.690362   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:07.733229   78367 cri.go:89] found id: ""
	I1213 20:24:07.733260   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.733268   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:07.733274   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:07.733325   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:07.767187   78367 cri.go:89] found id: ""
	I1213 20:24:07.767218   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.767229   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:07.767237   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:07.767309   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:07.803454   78367 cri.go:89] found id: ""
	I1213 20:24:07.803477   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.803485   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:07.803493   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:07.803504   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:07.884578   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:07.884602   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:07.884616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:07.966402   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:07.966448   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.010335   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:08.010368   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:08.064614   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:08.064647   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:10.580540   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:10.597959   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:10.598030   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:10.667638   78367 cri.go:89] found id: ""
	I1213 20:24:10.667665   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.667675   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:10.667683   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:10.667739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:10.728894   78367 cri.go:89] found id: ""
	I1213 20:24:10.728918   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.728929   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:10.728936   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:10.728992   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:10.771954   78367 cri.go:89] found id: ""
	I1213 20:24:10.771991   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.772001   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:10.772009   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:10.772067   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:10.818154   78367 cri.go:89] found id: ""
	I1213 20:24:10.818181   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.818188   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:10.818193   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:10.818240   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:10.858974   78367 cri.go:89] found id: ""
	I1213 20:24:10.859003   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.859014   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:10.859021   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:10.859086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:10.908481   78367 cri.go:89] found id: ""
	I1213 20:24:10.908511   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.908524   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:10.908532   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:10.908604   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:10.944951   78367 cri.go:89] found id: ""
	I1213 20:24:10.944979   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.944987   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:10.945001   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:10.945064   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:10.979563   78367 cri.go:89] found id: ""
	I1213 20:24:10.979588   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.979596   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:10.979604   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:10.979616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:11.052472   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:11.052507   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:11.068916   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:11.068947   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:11.146800   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:11.146826   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:11.146839   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:11.248307   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:11.248347   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:13.794975   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:13.809490   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:13.809563   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:13.845247   78367 cri.go:89] found id: ""
	I1213 20:24:13.845312   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.845326   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:13.845337   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:13.845404   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:13.891111   78367 cri.go:89] found id: ""
	I1213 20:24:13.891155   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.891167   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:13.891174   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:13.891225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:13.944404   78367 cri.go:89] found id: ""
	I1213 20:24:13.944423   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.944431   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:13.944438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:13.944479   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:13.982745   78367 cri.go:89] found id: ""
	I1213 20:24:13.982766   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.982773   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:13.982779   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:13.982823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:14.018505   78367 cri.go:89] found id: ""
	I1213 20:24:14.018537   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.018547   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:14.018555   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:14.018622   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:14.053196   78367 cri.go:89] found id: ""
	I1213 20:24:14.053222   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.053233   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:14.053241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:14.053305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:14.085486   78367 cri.go:89] found id: ""
	I1213 20:24:14.085516   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.085526   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:14.085534   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:14.085600   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:14.123930   78367 cri.go:89] found id: ""
	I1213 20:24:14.123958   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.123968   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:14.123979   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:14.123993   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:14.184665   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:14.184705   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:14.207707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:14.207742   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:14.317989   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:14.318017   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:14.318037   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:14.440228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:14.440275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:16.992002   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:17.010798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:17.010887   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:17.054515   78367 cri.go:89] found id: ""
	I1213 20:24:17.054539   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.054548   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:17.054557   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:17.054608   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:17.106222   78367 cri.go:89] found id: ""
	I1213 20:24:17.106258   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.106269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:17.106276   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:17.106328   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:17.145680   78367 cri.go:89] found id: ""
	I1213 20:24:17.145706   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.145713   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:17.145719   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:17.145772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:17.183345   78367 cri.go:89] found id: ""
	I1213 20:24:17.183372   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.183383   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:17.183391   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:17.183440   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:17.218181   78367 cri.go:89] found id: ""
	I1213 20:24:17.218214   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.218226   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:17.218233   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:17.218308   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:17.260697   78367 cri.go:89] found id: ""
	I1213 20:24:17.260736   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.260747   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:17.260756   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:17.260815   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:17.296356   78367 cri.go:89] found id: ""
	I1213 20:24:17.296383   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.296394   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:17.296402   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:17.296452   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:17.332909   78367 cri.go:89] found id: ""
	I1213 20:24:17.332936   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.332946   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:17.332956   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:17.332979   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:17.400328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:17.400361   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:17.419802   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:17.419836   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:17.508687   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:17.508709   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:17.508724   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:17.594401   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:17.594433   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:20.132881   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:20.151309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:20.151382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:20.185818   78367 cri.go:89] found id: ""
	I1213 20:24:20.185845   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.185854   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:20.185862   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:20.185913   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:20.227855   78367 cri.go:89] found id: ""
	I1213 20:24:20.227885   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.227895   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:20.227902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:20.227957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:20.265126   78367 cri.go:89] found id: ""
	I1213 20:24:20.265149   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.265158   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:20.265165   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:20.265215   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:20.303082   78367 cri.go:89] found id: ""
	I1213 20:24:20.303100   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.303106   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:20.303112   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:20.303148   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:20.334523   78367 cri.go:89] found id: ""
	I1213 20:24:20.334554   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.334565   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:20.334573   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:20.334634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:20.367872   78367 cri.go:89] found id: ""
	I1213 20:24:20.367904   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.367915   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:20.367922   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:20.367972   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:20.401025   78367 cri.go:89] found id: ""
	I1213 20:24:20.401053   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.401063   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:20.401071   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:20.401118   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:20.437198   78367 cri.go:89] found id: ""
	I1213 20:24:20.437224   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.437232   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:20.437240   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:20.437252   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:20.491638   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:20.491670   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:20.507146   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:20.507176   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:20.586662   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:20.586708   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:20.586725   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:20.677650   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:20.677702   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.226457   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:23.240139   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:23.240197   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:23.276469   78367 cri.go:89] found id: ""
	I1213 20:24:23.276503   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.276514   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:23.276522   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:23.276576   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:23.321764   78367 cri.go:89] found id: ""
	I1213 20:24:23.321793   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.321804   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:23.321811   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:23.321860   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:23.355263   78367 cri.go:89] found id: ""
	I1213 20:24:23.355297   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.355308   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:23.355315   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:23.355368   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:23.396846   78367 cri.go:89] found id: ""
	I1213 20:24:23.396875   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.396885   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:23.396894   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:23.396955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:23.435540   78367 cri.go:89] found id: ""
	I1213 20:24:23.435567   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.435578   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:23.435586   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:23.435634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:23.473920   78367 cri.go:89] found id: ""
	I1213 20:24:23.473944   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.473959   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:23.473967   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:23.474023   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:23.507136   78367 cri.go:89] found id: ""
	I1213 20:24:23.507168   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.507177   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:23.507183   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:23.507239   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:23.539050   78367 cri.go:89] found id: ""
	I1213 20:24:23.539075   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.539083   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:23.539091   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:23.539104   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:23.553000   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:23.553026   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:23.619106   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:23.619128   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:23.619143   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:23.704028   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:23.704065   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.740575   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:23.740599   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.290469   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:26.303070   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:26.303114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:26.333881   78367 cri.go:89] found id: ""
	I1213 20:24:26.333902   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.333909   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:26.333915   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:26.333957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:26.367218   78367 cri.go:89] found id: ""
	I1213 20:24:26.367246   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.367253   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:26.367258   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:26.367314   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:26.397281   78367 cri.go:89] found id: ""
	I1213 20:24:26.397313   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.397325   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:26.397332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:26.397388   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:26.429238   78367 cri.go:89] found id: ""
	I1213 20:24:26.429260   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.429270   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:26.429290   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:26.429335   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:26.457723   78367 cri.go:89] found id: ""
	I1213 20:24:26.457751   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.457760   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:26.457765   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:26.457820   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:26.487066   78367 cri.go:89] found id: ""
	I1213 20:24:26.487086   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.487093   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:26.487098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:26.487153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:26.517336   78367 cri.go:89] found id: ""
	I1213 20:24:26.517360   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.517367   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:26.517373   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:26.517428   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:26.547918   78367 cri.go:89] found id: ""
	I1213 20:24:26.547940   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.547947   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:26.547955   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:26.547966   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:26.614500   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:26.614527   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:26.614541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:26.688954   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:26.688983   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:26.723430   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:26.723453   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.771679   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:26.771707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.284113   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:29.296309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:29.296365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:29.335369   78367 cri.go:89] found id: ""
	I1213 20:24:29.335395   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.335404   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:29.335411   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:29.335477   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:29.364958   78367 cri.go:89] found id: ""
	I1213 20:24:29.364996   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.365005   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:29.365011   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:29.365056   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:29.395763   78367 cri.go:89] found id: ""
	I1213 20:24:29.395785   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.395792   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:29.395798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:29.395847   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:29.426100   78367 cri.go:89] found id: ""
	I1213 20:24:29.426131   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.426141   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:29.426148   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:29.426212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:29.454982   78367 cri.go:89] found id: ""
	I1213 20:24:29.455011   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.455018   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:29.455025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:29.455086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:29.490059   78367 cri.go:89] found id: ""
	I1213 20:24:29.490088   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.490098   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:29.490105   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:29.490164   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:29.523139   78367 cri.go:89] found id: ""
	I1213 20:24:29.523170   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.523179   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:29.523184   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:29.523235   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:29.553382   78367 cri.go:89] found id: ""
	I1213 20:24:29.553411   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.553422   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:29.553432   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:29.553445   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:29.603370   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:29.603399   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.615270   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:29.615296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:29.676210   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:29.676241   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:29.676256   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:29.748591   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:29.748620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:32.283657   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:32.295699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:32.295770   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:32.326072   78367 cri.go:89] found id: ""
	I1213 20:24:32.326100   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.326109   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:32.326116   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:32.326174   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:32.359219   78367 cri.go:89] found id: ""
	I1213 20:24:32.359267   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.359279   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:32.359287   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:32.359374   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:32.389664   78367 cri.go:89] found id: ""
	I1213 20:24:32.389687   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.389694   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:32.389700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:32.389756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:32.419871   78367 cri.go:89] found id: ""
	I1213 20:24:32.419893   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.419899   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:32.419904   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:32.419955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:32.449254   78367 cri.go:89] found id: ""
	I1213 20:24:32.449282   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.449292   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:32.449300   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:32.449359   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:32.477857   78367 cri.go:89] found id: ""
	I1213 20:24:32.477887   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.477897   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:32.477905   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:32.477965   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:32.507395   78367 cri.go:89] found id: ""
	I1213 20:24:32.507420   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.507429   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:32.507437   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:32.507493   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:32.536846   78367 cri.go:89] found id: ""
	I1213 20:24:32.536882   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.536894   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:32.536904   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:32.536918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:32.586510   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:32.586540   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:32.598914   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:32.598941   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:32.661653   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:32.661673   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:32.661686   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:32.738149   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:32.738180   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:35.274525   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:35.287259   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:35.287338   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:35.321233   78367 cri.go:89] found id: ""
	I1213 20:24:35.321269   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.321280   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:35.321287   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:35.321350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:35.351512   78367 cri.go:89] found id: ""
	I1213 20:24:35.351535   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.351543   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:35.351549   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:35.351607   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:35.380770   78367 cri.go:89] found id: ""
	I1213 20:24:35.380795   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.380805   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:35.380812   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:35.380868   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:35.410311   78367 cri.go:89] found id: ""
	I1213 20:24:35.410339   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.410348   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:35.410356   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:35.410410   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:35.437955   78367 cri.go:89] found id: ""
	I1213 20:24:35.437979   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.437987   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:35.437992   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:35.438039   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:35.467621   78367 cri.go:89] found id: ""
	I1213 20:24:35.467646   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.467657   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:35.467665   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:35.467729   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:35.496779   78367 cri.go:89] found id: ""
	I1213 20:24:35.496801   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.496809   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:35.496814   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:35.496867   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:35.527107   78367 cri.go:89] found id: ""
	I1213 20:24:35.527140   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.527148   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:35.527157   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:35.527167   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:35.573444   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:35.573472   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:35.586107   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:35.586129   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:35.647226   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:35.647249   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:35.647261   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:35.721264   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:35.721297   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.256983   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:38.269600   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:38.269665   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:38.304526   78367 cri.go:89] found id: ""
	I1213 20:24:38.304552   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.304559   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:38.304566   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:38.304621   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:38.334858   78367 cri.go:89] found id: ""
	I1213 20:24:38.334885   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.334896   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:38.334902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:38.334959   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:38.364281   78367 cri.go:89] found id: ""
	I1213 20:24:38.364305   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.364312   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:38.364318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:38.364364   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:38.393853   78367 cri.go:89] found id: ""
	I1213 20:24:38.393878   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.393886   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:38.393892   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:38.393936   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:38.424196   78367 cri.go:89] found id: ""
	I1213 20:24:38.424225   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.424234   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:38.424241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:38.424305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:38.454285   78367 cri.go:89] found id: ""
	I1213 20:24:38.454311   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.454322   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:38.454330   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:38.454382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:38.483158   78367 cri.go:89] found id: ""
	I1213 20:24:38.483187   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.483194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:38.483199   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:38.483250   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:38.512116   78367 cri.go:89] found id: ""
	I1213 20:24:38.512149   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.512161   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:38.512172   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:38.512186   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:38.587026   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:38.587053   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:38.587069   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:38.661024   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:38.661055   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.695893   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:38.695922   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:38.746253   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:38.746282   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.258578   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:41.271632   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:41.271691   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:41.303047   78367 cri.go:89] found id: ""
	I1213 20:24:41.303073   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.303081   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:41.303087   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:41.303149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:41.334605   78367 cri.go:89] found id: ""
	I1213 20:24:41.334642   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.334653   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:41.334662   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:41.334714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:41.367617   78367 cri.go:89] found id: ""
	I1213 20:24:41.367650   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.367661   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:41.367670   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:41.367724   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:41.399772   78367 cri.go:89] found id: ""
	I1213 20:24:41.399800   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.399811   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:41.399819   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:41.399880   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:41.431833   78367 cri.go:89] found id: ""
	I1213 20:24:41.431869   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.431879   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:41.431887   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:41.431948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:41.462640   78367 cri.go:89] found id: ""
	I1213 20:24:41.462669   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.462679   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:41.462688   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:41.462757   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:41.492716   78367 cri.go:89] found id: ""
	I1213 20:24:41.492748   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.492758   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:41.492764   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:41.492823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:41.527697   78367 cri.go:89] found id: ""
	I1213 20:24:41.527729   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.527739   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:41.527750   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:41.527763   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.540507   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:41.540530   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:41.602837   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:41.602873   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:41.602888   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:41.676818   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:41.676855   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:41.713699   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:41.713731   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.263397   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:44.275396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:44.275463   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:44.306065   78367 cri.go:89] found id: ""
	I1213 20:24:44.306095   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.306106   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:44.306114   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:44.306170   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:44.336701   78367 cri.go:89] found id: ""
	I1213 20:24:44.336734   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.336746   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:44.336754   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:44.336803   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:44.367523   78367 cri.go:89] found id: ""
	I1213 20:24:44.367553   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.367564   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:44.367571   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:44.367626   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:44.397934   78367 cri.go:89] found id: ""
	I1213 20:24:44.397960   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.397970   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:44.397978   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:44.398043   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:44.428770   78367 cri.go:89] found id: ""
	I1213 20:24:44.428799   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.428810   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:44.428817   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:44.428874   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:44.459961   78367 cri.go:89] found id: ""
	I1213 20:24:44.459999   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.460011   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:44.460018   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:44.460068   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:44.491377   78367 cri.go:89] found id: ""
	I1213 20:24:44.491407   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.491419   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:44.491426   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:44.491488   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:44.521764   78367 cri.go:89] found id: ""
	I1213 20:24:44.521798   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.521808   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:44.521819   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:44.521835   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:44.584292   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:44.584316   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:44.584328   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:44.654841   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:44.654880   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:44.689572   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:44.689598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.738234   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:44.738265   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:47.250759   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:47.262717   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:47.262786   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:47.291884   78367 cri.go:89] found id: ""
	I1213 20:24:47.291910   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.291917   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:47.291923   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:47.291968   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:47.322010   78367 cri.go:89] found id: ""
	I1213 20:24:47.322036   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.322047   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:47.322056   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:47.322114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:47.352441   78367 cri.go:89] found id: ""
	I1213 20:24:47.352470   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.352478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:47.352483   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:47.352535   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:47.382622   78367 cri.go:89] found id: ""
	I1213 20:24:47.382646   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.382653   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:47.382659   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:47.382709   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:47.413127   78367 cri.go:89] found id: ""
	I1213 20:24:47.413149   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.413156   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:47.413161   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:47.413212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:47.445397   78367 cri.go:89] found id: ""
	I1213 20:24:47.445423   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.445430   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:47.445435   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:47.445483   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:47.475871   78367 cri.go:89] found id: ""
	I1213 20:24:47.475897   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.475904   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:47.475910   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:47.475966   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:47.505357   78367 cri.go:89] found id: ""
	I1213 20:24:47.505382   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.505389   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:47.505397   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:47.505407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:47.568960   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:47.568982   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:47.569010   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:47.646228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:47.646262   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:47.679590   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:47.679616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:47.726854   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:47.726884   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.239188   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:50.251010   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:50.251061   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:50.281168   78367 cri.go:89] found id: ""
	I1213 20:24:50.281194   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.281204   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:50.281211   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:50.281277   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:50.310396   78367 cri.go:89] found id: ""
	I1213 20:24:50.310421   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.310431   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:50.310438   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:50.310491   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:50.340824   78367 cri.go:89] found id: ""
	I1213 20:24:50.340856   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.340866   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:50.340873   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:50.340937   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:50.377401   78367 cri.go:89] found id: ""
	I1213 20:24:50.377430   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.377437   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:50.377443   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:50.377500   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:50.406521   78367 cri.go:89] found id: ""
	I1213 20:24:50.406552   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.406562   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:50.406567   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:50.406632   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:50.440070   78367 cri.go:89] found id: ""
	I1213 20:24:50.440101   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.440112   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:50.440118   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:50.440168   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:50.473103   78367 cri.go:89] found id: ""
	I1213 20:24:50.473134   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.473145   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:50.473152   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:50.473218   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:50.503787   78367 cri.go:89] found id: ""
	I1213 20:24:50.503815   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.503824   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:50.503832   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:50.503842   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:50.551379   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:50.551407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.563705   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:50.563732   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:50.625016   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:50.625046   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:50.625062   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:50.717566   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:50.717601   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.254296   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:53.266940   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:53.266995   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:53.302975   78367 cri.go:89] found id: ""
	I1213 20:24:53.303000   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.303008   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:53.303013   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:53.303080   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:53.338434   78367 cri.go:89] found id: ""
	I1213 20:24:53.338461   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.338469   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:53.338474   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:53.338526   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:53.375117   78367 cri.go:89] found id: ""
	I1213 20:24:53.375146   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.375156   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:53.375164   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:53.375221   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:53.413376   78367 cri.go:89] found id: ""
	I1213 20:24:53.413406   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.413416   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:53.413423   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:53.413482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:53.447697   78367 cri.go:89] found id: ""
	I1213 20:24:53.447725   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.447736   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:53.447743   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:53.447802   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:53.480987   78367 cri.go:89] found id: ""
	I1213 20:24:53.481019   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.481037   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:53.481045   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:53.481149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:53.516573   78367 cri.go:89] found id: ""
	I1213 20:24:53.516602   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.516611   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:53.516617   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:53.516664   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:53.552098   78367 cri.go:89] found id: ""
	I1213 20:24:53.552128   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.552144   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:53.552155   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:53.552168   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:53.632362   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:53.632393   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.667030   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:53.667061   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:53.716328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:53.716355   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:53.730194   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:53.730219   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:53.804612   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.305032   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:56.317875   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:56.317934   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:56.353004   78367 cri.go:89] found id: ""
	I1213 20:24:56.353027   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.353035   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:56.353040   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:56.353086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:56.398694   78367 cri.go:89] found id: ""
	I1213 20:24:56.398722   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.398731   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:56.398739   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:56.398800   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:56.430481   78367 cri.go:89] found id: ""
	I1213 20:24:56.430512   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.430523   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:56.430530   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:56.430589   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:56.460467   78367 cri.go:89] found id: ""
	I1213 20:24:56.460501   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.460512   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:56.460520   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:56.460583   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:56.490776   78367 cri.go:89] found id: ""
	I1213 20:24:56.490804   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.490814   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:56.490822   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:56.490889   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:56.520440   78367 cri.go:89] found id: ""
	I1213 20:24:56.520466   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.520473   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:56.520478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:56.520525   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:56.550233   78367 cri.go:89] found id: ""
	I1213 20:24:56.550258   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.550266   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:56.550271   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:56.550347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:56.580651   78367 cri.go:89] found id: ""
	I1213 20:24:56.580681   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.580692   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:56.580703   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:56.580716   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:56.650811   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.650839   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:56.650892   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:56.728061   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:56.728089   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:56.767782   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:56.767809   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:56.818747   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:56.818781   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:59.331474   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:59.344319   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:59.344379   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:59.373901   78367 cri.go:89] found id: ""
	I1213 20:24:59.373931   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.373941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:59.373947   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:59.373999   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:59.405800   78367 cri.go:89] found id: ""
	I1213 20:24:59.405832   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.405844   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:59.405851   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:59.405922   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:59.435487   78367 cri.go:89] found id: ""
	I1213 20:24:59.435517   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.435527   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:59.435535   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:59.435587   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:59.466466   78367 cri.go:89] found id: ""
	I1213 20:24:59.466489   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.466497   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:59.466502   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:59.466543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:59.500301   78367 cri.go:89] found id: ""
	I1213 20:24:59.500330   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.500337   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:59.500342   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:59.500387   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:59.532614   78367 cri.go:89] found id: ""
	I1213 20:24:59.532642   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.532651   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:59.532658   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:59.532717   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:59.562990   78367 cri.go:89] found id: ""
	I1213 20:24:59.563013   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.563020   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:59.563034   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:59.563078   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:59.593335   78367 cri.go:89] found id: ""
	I1213 20:24:59.593366   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.593376   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:59.593386   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:59.593401   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:59.659058   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:59.659083   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:59.659097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:59.733569   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:59.733600   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:59.770151   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:59.770178   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:59.820506   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:59.820534   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.334083   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:02.346559   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:02.346714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:02.380346   78367 cri.go:89] found id: ""
	I1213 20:25:02.380376   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.380384   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:02.380390   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:02.380441   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:02.412347   78367 cri.go:89] found id: ""
	I1213 20:25:02.412374   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.412385   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:02.412392   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:02.412453   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:02.443408   78367 cri.go:89] found id: ""
	I1213 20:25:02.443441   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.443453   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:02.443461   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:02.443514   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:02.474165   78367 cri.go:89] found id: ""
	I1213 20:25:02.474193   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.474201   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:02.474206   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:02.474272   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:02.505076   78367 cri.go:89] found id: ""
	I1213 20:25:02.505109   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.505121   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:02.505129   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:02.505186   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:02.541145   78367 cri.go:89] found id: ""
	I1213 20:25:02.541174   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.541182   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:02.541187   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:02.541236   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:02.579150   78367 cri.go:89] found id: ""
	I1213 20:25:02.579183   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.579194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:02.579201   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:02.579262   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:02.611542   78367 cri.go:89] found id: ""
	I1213 20:25:02.611582   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.611594   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:02.611607   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:02.611620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:02.661145   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:02.661183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.673918   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:02.673944   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:02.745321   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:02.745345   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:02.745358   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:02.820953   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:02.820992   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
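Editor's note: each cycle above is the same probe fanned out over the control-plane components: run crictl with a name filter and report an empty ID list as "No container was found matching ...". The following is a minimal, hypothetical Go sketch of that check (plain local exec instead of minikube's SSH runner; component names copied from the log), not minikube's actual implementation.

	// Hypothetical sketch of the per-component probe repeated above: run
	// `crictl ps -a --quiet --name=<component>` and treat empty output as
	// "no container found". Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func findContainerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil // crictl missing or runtime unreachable: report nothing found
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			if ids := findContainerIDs(c); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}
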
	I1213 20:25:05.373838   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:05.386758   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:05.386833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:05.419177   78367 cri.go:89] found id: ""
	I1213 20:25:05.419205   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.419215   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:05.419223   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:05.419292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:05.450595   78367 cri.go:89] found id: ""
	I1213 20:25:05.450628   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.450639   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:05.450648   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:05.450707   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:05.481818   78367 cri.go:89] found id: ""
	I1213 20:25:05.481844   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.481852   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:05.481857   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:05.481902   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:05.517195   78367 cri.go:89] found id: ""
	I1213 20:25:05.517230   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.517239   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:05.517246   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:05.517302   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:05.548698   78367 cri.go:89] found id: ""
	I1213 20:25:05.548733   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.548744   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:05.548753   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:05.548811   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:05.579983   78367 cri.go:89] found id: ""
	I1213 20:25:05.580009   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.580015   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:05.580022   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:05.580070   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:05.610660   78367 cri.go:89] found id: ""
	I1213 20:25:05.610685   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.610693   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:05.610699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:05.610750   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:05.641572   78367 cri.go:89] found id: ""
	I1213 20:25:05.641598   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.641605   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:05.641614   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:05.641625   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:05.712243   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:05.712264   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:05.712275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:05.793232   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:05.793271   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.827863   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:05.827901   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:05.877641   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:05.877671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
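Editor's note: the "Gathering logs for ..." steps repeat a fixed set of collectors: kubelet and CRI-O via journalctl, dmesg, `kubectl describe nodes`, and container status. A hedged Go sketch of the journalctl part follows, with unit names and the 400-line tail taken from the log; it assumes a systemd host and is not minikube's code.

	// Minimal sketch of the journal-based collectors above: tail the last N
	// journal lines for a systemd unit, mirroring `journalctl -u <unit> -n 400`.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
	)

	func tailUnitLogs(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", strconv.Itoa(lines)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			logs, err := tailUnitLogs(unit, 400)
			if err != nil {
				fmt.Printf("could not gather %s logs: %v\n", unit, err)
				continue
			}
			fmt.Printf("== %s (last 400 lines) ==\n%s\n", unit, logs)
		}
	}
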
	I1213 20:25:08.390425   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:08.402888   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:08.402944   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:08.436903   78367 cri.go:89] found id: ""
	I1213 20:25:08.436931   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.436941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:08.436948   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:08.437005   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:08.469526   78367 cri.go:89] found id: ""
	I1213 20:25:08.469561   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.469574   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:08.469581   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:08.469644   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:08.500136   78367 cri.go:89] found id: ""
	I1213 20:25:08.500165   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.500172   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:08.500178   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:08.500223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:08.537556   78367 cri.go:89] found id: ""
	I1213 20:25:08.537591   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.537603   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:08.537611   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:08.537669   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:08.577468   78367 cri.go:89] found id: ""
	I1213 20:25:08.577492   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.577501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:08.577509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:08.577566   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:08.632075   78367 cri.go:89] found id: ""
	I1213 20:25:08.632103   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.632113   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:08.632120   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:08.632178   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:08.671119   78367 cri.go:89] found id: ""
	I1213 20:25:08.671148   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.671158   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:08.671166   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:08.671225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:08.700873   78367 cri.go:89] found id: ""
	I1213 20:25:08.700900   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.700908   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:08.700916   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:08.700927   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.713084   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:08.713107   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:08.780299   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:08.780331   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:08.780346   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:08.851830   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:08.851865   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:08.886834   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:08.886883   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.435256   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:11.447096   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:11.447155   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:11.477376   78367 cri.go:89] found id: ""
	I1213 20:25:11.477403   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.477411   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:11.477416   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:11.477460   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:11.507532   78367 cri.go:89] found id: ""
	I1213 20:25:11.507564   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.507572   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:11.507582   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:11.507628   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:11.537352   78367 cri.go:89] found id: ""
	I1213 20:25:11.537383   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.537393   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:11.537400   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:11.537450   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:11.567653   78367 cri.go:89] found id: ""
	I1213 20:25:11.567681   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.567693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:11.567700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:11.567756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:11.597752   78367 cri.go:89] found id: ""
	I1213 20:25:11.597782   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.597790   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:11.597795   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:11.597840   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:11.626231   78367 cri.go:89] found id: ""
	I1213 20:25:11.626258   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.626269   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:11.626276   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:11.626334   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:11.655694   78367 cri.go:89] found id: ""
	I1213 20:25:11.655724   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.655733   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:11.655740   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:11.655794   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:11.685714   78367 cri.go:89] found id: ""
	I1213 20:25:11.685742   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.685750   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:11.685758   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:11.685768   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.733749   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:11.733774   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:11.746307   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:11.746330   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:11.807168   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:11.807190   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:11.807202   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:11.878490   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:11.878522   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.416516   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:14.428258   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:14.428339   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:14.458229   78367 cri.go:89] found id: ""
	I1213 20:25:14.458255   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.458263   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:14.458272   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:14.458326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:14.488061   78367 cri.go:89] found id: ""
	I1213 20:25:14.488101   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.488109   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:14.488114   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:14.488159   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:14.516854   78367 cri.go:89] found id: ""
	I1213 20:25:14.516880   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.516888   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:14.516893   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:14.516953   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:14.549881   78367 cri.go:89] found id: ""
	I1213 20:25:14.549908   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.549919   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:14.549925   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:14.549982   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:14.579410   78367 cri.go:89] found id: ""
	I1213 20:25:14.579439   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.579449   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:14.579457   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:14.579507   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:14.609126   78367 cri.go:89] found id: ""
	I1213 20:25:14.609155   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.609163   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:14.609169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:14.609216   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:14.638655   78367 cri.go:89] found id: ""
	I1213 20:25:14.638682   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.638689   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:14.638694   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:14.638739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:14.667950   78367 cri.go:89] found id: ""
	I1213 20:25:14.667977   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.667986   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:14.667997   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:14.668011   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.705223   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:14.705250   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:14.753645   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:14.753671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:14.766082   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:14.766106   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:14.826802   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:14.826829   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:14.826841   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
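Editor's note: between log-gathering passes the runner keeps re-issuing `sudo pgrep -xnf kube-apiserver.*minikube.*`; once the overall window expires it gives up and falls back to resetting the cluster (next lines). A small Go sketch of that wait loop follows; the interval and timeout are assumptions, not minikube's values.

	// Hedged sketch of the polling implied by the repeated pgrep runs above:
	// keep checking for a kube-apiserver process until a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println("restart did not converge:", err)
		}
	}
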
	I1213 20:25:17.400518   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:17.412464   78367 kubeadm.go:597] duration metric: took 4m2.435244002s to restartPrimaryControlPlane
	W1213 20:25:17.412536   78367 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 20:25:17.412564   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:25:19.422149   78367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.009561199s)
	I1213 20:25:19.422215   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:25:19.435431   78367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:25:19.444465   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:25:19.452996   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:25:19.453011   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:25:19.453051   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:25:19.461055   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:25:19.461096   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:25:19.469525   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:25:19.477399   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:25:19.477442   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:25:19.485719   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.493837   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:25:19.493895   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.502493   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:25:19.510479   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:25:19.510525   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
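Editor's note: the block above is a stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint, and files that are absent or do not mention it are removed before `kubeadm init` runs. A compact sketch of the same idea, with the endpoint and paths taken from the log; illustrative only, not minikube's code.

	// Keep each kubeconfig only if it already points at the expected
	// control-plane endpoint; otherwise delete it so kubeadm regenerates it.
	package main

	import (
		"os"
		"strings"
	)

	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // missing or stale: safe to let kubeadm rewrite it
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
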
	I1213 20:25:19.518746   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:25:19.585664   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:25:19.585781   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:25:19.709117   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:25:19.709242   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:25:19.709362   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:25:19.865449   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:25:19.867503   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:25:19.867605   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:25:19.867668   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:25:19.867759   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:25:19.867864   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:25:19.867978   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:25:19.868062   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:25:19.868159   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:25:19.868251   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:25:19.868515   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:25:19.868889   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:25:19.869062   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:25:19.869157   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:25:19.955108   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:25:20.380950   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:25:20.496704   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:25:20.598530   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:25:20.612045   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:25:20.613742   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:25:20.613809   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:25:20.733629   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:25:20.735476   78367 out.go:235]   - Booting up control plane ...
	I1213 20:25:20.735586   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:25:20.739585   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:25:20.740414   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:25:20.741056   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:25:20.743491   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:26:00.744556   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:26:00.745298   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:00.745523   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:05.746023   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:05.746244   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:15.746586   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:15.746767   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:35.747606   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:35.747803   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749327   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:27:15.749616   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749642   78367 kubeadm.go:310] 
	I1213 20:27:15.749705   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:27:15.749763   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:27:15.749771   78367 kubeadm.go:310] 
	I1213 20:27:15.749801   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:27:15.749858   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:27:15.749970   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:27:15.749978   78367 kubeadm.go:310] 
	I1213 20:27:15.750116   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:27:15.750147   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:27:15.750175   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:27:15.750182   78367 kubeadm.go:310] 
	I1213 20:27:15.750323   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:27:15.750445   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:27:15.750469   78367 kubeadm.go:310] 
	I1213 20:27:15.750594   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:27:15.750679   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:27:15.750750   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:27:15.750838   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:27:15.750867   78367 kubeadm.go:310] 
	I1213 20:27:15.751901   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:27:15.752044   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:27:15.752128   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
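Editor's note: the init failure above reduces to one signal: the kubelet never answers its local health check, so every probe of http://localhost:10248/healthz is refused until the 4m0s wait-control-plane window lapses. The following Go sketch shows that probe pattern under stated assumptions (backoff values and window are illustrative, not kubeadm's).

	// Poll the kubelet healthz endpoint with a growing backoff and give up
	// after a fixed window, as the [kubelet-check] lines above describe.
	package main

	import (
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	func waitForKubelet(window time.Duration) error {
		deadline := time.Now().Add(window)
		backoff := 5 * time.Second
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(backoff)
			if backoff < 40*time.Second {
				backoff *= 2
			}
		}
		return errors.New("timed out waiting for the condition")
	}

	func main() {
		if err := waitForKubelet(4 * time.Minute); err != nil {
			fmt.Println("kubelet never became healthy:", err)
		}
	}
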
	W1213 20:27:15.752253   78367 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 20:27:15.752296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:27:16.207985   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:27:16.221729   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:27:16.230896   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:27:16.230915   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:27:16.230963   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:27:16.239780   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:27:16.239853   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:27:16.248841   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:27:16.257494   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:27:16.257547   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:27:16.266220   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.274395   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:27:16.274446   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.282941   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:27:16.291155   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:27:16.291206   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:27:16.299780   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:27:16.492967   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:29:12.537014   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:29:12.537124   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:29:12.538949   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:29:12.539024   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:29:12.539128   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:29:12.539224   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:29:12.539305   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:29:12.539357   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:29:12.540964   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:29:12.541051   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:29:12.541164   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:29:12.541297   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:29:12.541385   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:29:12.541510   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:29:12.541593   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:29:12.541696   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:29:12.541764   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:29:12.541825   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:29:12.541886   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:29:12.541918   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:29:12.541993   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:29:12.542062   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:29:12.542141   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:29:12.542249   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:29:12.542337   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:29:12.542454   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:29:12.542564   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:29:12.542608   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:29:12.542689   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:29:12.544295   78367 out.go:235]   - Booting up control plane ...
	I1213 20:29:12.544374   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:29:12.544440   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:29:12.544496   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:29:12.544566   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:29:12.544708   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:29:12.544763   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:29:12.544822   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.544980   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545046   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545210   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545282   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545456   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545529   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545681   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545742   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545910   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545920   78367 kubeadm.go:310] 
	I1213 20:29:12.545956   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:29:12.545989   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:29:12.545999   78367 kubeadm.go:310] 
	I1213 20:29:12.546026   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:29:12.546053   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:29:12.546145   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:29:12.546153   78367 kubeadm.go:310] 
	I1213 20:29:12.546246   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:29:12.546317   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:29:12.546377   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:29:12.546386   78367 kubeadm.go:310] 
	I1213 20:29:12.546485   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:29:12.546561   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:29:12.546568   78367 kubeadm.go:310] 
	I1213 20:29:12.546677   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:29:12.546761   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:29:12.546831   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:29:12.546913   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:29:12.546942   78367 kubeadm.go:310] 
	I1213 20:29:12.546976   78367 kubeadm.go:394] duration metric: took 7m57.617019103s to StartCluster
	I1213 20:29:12.547025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:29:12.547089   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:29:12.589567   78367 cri.go:89] found id: ""
	I1213 20:29:12.589592   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.589599   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:29:12.589605   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:29:12.589660   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:29:12.621414   78367 cri.go:89] found id: ""
	I1213 20:29:12.621438   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.621445   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:29:12.621450   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:29:12.621510   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:29:12.652624   78367 cri.go:89] found id: ""
	I1213 20:29:12.652655   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.652666   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:29:12.652674   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:29:12.652739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:29:12.682651   78367 cri.go:89] found id: ""
	I1213 20:29:12.682683   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.682693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:29:12.682701   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:29:12.682767   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:29:12.714100   78367 cri.go:89] found id: ""
	I1213 20:29:12.714127   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.714134   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:29:12.714140   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:29:12.714194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:29:12.745402   78367 cri.go:89] found id: ""
	I1213 20:29:12.745436   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.745446   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:29:12.745454   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:29:12.745515   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:29:12.775916   78367 cri.go:89] found id: ""
	I1213 20:29:12.775942   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.775949   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:29:12.775954   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:29:12.776009   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:29:12.806128   78367 cri.go:89] found id: ""
	I1213 20:29:12.806161   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.806171   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:29:12.806183   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:29:12.806197   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:29:12.841122   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:29:12.841151   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:29:12.888169   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:29:12.888203   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:29:12.900707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:29:12.900733   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:29:12.969370   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:29:12.969408   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:29:12.969423   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 20:29:13.074903   78367 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:29:13.074961   78367 out.go:270] * 
	W1213 20:29:13.075016   78367 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.075034   78367 out.go:270] * 
	W1213 20:29:13.075878   78367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 20:29:13.079429   78367 out.go:201] 
	W1213 20:29:13.080898   78367 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.080953   78367 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:29:13.080984   78367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:29:13.082622   78367 out.go:201] 

                                                
                                                
** /stderr **
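The kubeadm output above fails every health probe against http://localhost:10248/healthz with "connection refused", i.e. the kubelet never came up on the node. As a minimal manual sketch (not something this run executed, and assuming the profile name old-k8s-version-613355 taken from the log), the same probe and the service check kubeadm suggests could be repeated by hand:

	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "curl -sS http://localhost:10248/healthz"   # a healthy kubelet answers "ok"
	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "sudo systemctl status kubelet --no-pager"  # the check suggested in the kubeadm output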
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
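The stderr above ends with a concrete suggestion: pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. A sketch of a retry, reusing only flags that already appear in the failing invocation plus that suggested override (not a command this run executed):

	out/minikube-linux-amd64 start -p old-k8s-version-613355 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd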
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (231.657316ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
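The host is reported as Running while the API server on localhost:8443 refuses connections, which points at a dead control plane rather than a stopped VM. A sketch of the node-side checks the kubeadm output above recommends, assuming the same profile name (not commands this run executed):

	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "sudo journalctl -u kubelet -n 50 --no-pager"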
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-613355 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-191190 image list                          | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-535459             | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-535459                  | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | no-preload-475934 image list                           | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| image   | newest-cni-535459 image list                           | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| image   | default-k8s-diff-port-355668                           | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 20:23:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 20:23:38.197995   79820 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:23:38.198359   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198412   79820 out.go:358] Setting ErrFile to fd 2...
	I1213 20:23:38.198430   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198912   79820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:23:38.199937   79820 out.go:352] Setting JSON to false
	I1213 20:23:38.200882   79820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7561,"bootTime":1734113857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:23:38.200969   79820 start.go:139] virtualization: kvm guest
	I1213 20:23:38.202746   79820 out.go:177] * [newest-cni-535459] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:23:38.204302   79820 notify.go:220] Checking for updates...
	I1213 20:23:38.204304   79820 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:23:38.205592   79820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:23:38.206687   79820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:38.207863   79820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:23:38.208920   79820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:23:38.209928   79820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:23:38.211390   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:38.211789   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.211857   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.227106   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I1213 20:23:38.227528   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.228121   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.228141   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.228624   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.228802   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.229038   79820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:23:38.229314   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.229353   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.244124   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I1213 20:23:38.244541   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.245118   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.245150   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.245472   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.245656   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.280882   79820 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 20:23:38.282056   79820 start.go:297] selected driver: kvm2
	I1213 20:23:38.282071   79820 start.go:901] validating driver "kvm2" against &{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.282177   79820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:23:38.282946   79820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.283023   79820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:23:38.297713   79820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:23:38.298132   79820 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:23:38.298167   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:23:38.298222   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:38.298272   79820 start.go:340] cluster config:
	{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.298394   79820 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.299870   79820 out.go:177] * Starting "newest-cni-535459" primary control-plane node in "newest-cni-535459" cluster
	I1213 20:23:38.300922   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:23:38.300954   79820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 20:23:38.300961   79820 cache.go:56] Caching tarball of preloaded images
	I1213 20:23:38.301027   79820 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:23:38.301037   79820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 20:23:38.301139   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:38.301353   79820 start.go:360] acquireMachinesLock for newest-cni-535459: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:23:38.301405   79820 start.go:364] duration metric: took 31.317µs to acquireMachinesLock for "newest-cni-535459"
	I1213 20:23:38.301424   79820 start.go:96] Skipping create...Using existing machine configuration
	I1213 20:23:38.301434   79820 fix.go:54] fixHost starting: 
	I1213 20:23:38.301810   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.301846   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.316577   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I1213 20:23:38.317005   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.317449   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.317467   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.317793   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.317965   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.318117   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:23:38.319590   79820 fix.go:112] recreateIfNeeded on newest-cni-535459: state=Stopped err=<nil>
	I1213 20:23:38.319614   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	W1213 20:23:38.319782   79820 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 20:23:38.321580   79820 out.go:177] * Restarting existing kvm2 VM for "newest-cni-535459" ...
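
	fixHost found the libvirt domain in state=Stopped, so it restarts the existing VM instead of recreating it. The driver does this through the libvirt API (URI qemu:///system, per the config above); a rough manual equivalent with the virsh CLI would be:

	virsh -c qemu:///system domstate newest-cni-535459   # reports "shut off" before the restart
	virsh -c qemu:///system start newest-cni-535459      # what the .Start call effectively does
	virsh -c qemu:///system domstate newest-cni-535459   # should now report "running"
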
	I1213 20:23:38.105462   77223 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.795842823s)
	I1213 20:23:38.105518   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:38.120268   77223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:38.129684   77223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:38.141849   77223 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:38.141869   77223 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:38.141910   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:23:38.150679   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:38.150731   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:38.159954   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:23:38.168900   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:38.168957   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:38.178775   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.187799   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:38.187850   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.197158   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:23:38.206667   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:38.206722   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
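
	The cleanup above is a fixed pattern: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. Condensed into one loop (endpoint as used by this profile):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
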
	I1213 20:23:38.216276   77223 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:38.370967   77223 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:23:39.027955   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:39.041250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:39.041315   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:39.083287   78367 cri.go:89] found id: ""
	I1213 20:23:39.083314   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.083324   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:39.083331   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:39.083384   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:39.125760   78367 cri.go:89] found id: ""
	I1213 20:23:39.125787   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.125798   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:39.125805   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:39.125857   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:39.159459   78367 cri.go:89] found id: ""
	I1213 20:23:39.159487   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.159497   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:39.159504   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:39.159557   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:39.194175   78367 cri.go:89] found id: ""
	I1213 20:23:39.194204   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.194211   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:39.194217   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:39.194265   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:39.228851   78367 cri.go:89] found id: ""
	I1213 20:23:39.228879   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.228889   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:39.228897   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:39.228948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:39.266408   78367 cri.go:89] found id: ""
	I1213 20:23:39.266441   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.266452   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:39.266460   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:39.266505   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:39.303917   78367 cri.go:89] found id: ""
	I1213 20:23:39.303946   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.303957   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:39.303965   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:39.304024   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:39.337643   78367 cri.go:89] found id: ""
	I1213 20:23:39.337670   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.337680   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:39.337690   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:39.337707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:39.394343   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:39.394375   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:39.411615   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:39.411645   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:39.484070   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:39.484095   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:39.484110   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:39.570207   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:39.570231   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
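
	With no control-plane containers found, the diagnostics pass falls back to host-level sources. The same data can be gathered by hand on the node with the commands shown above:

	sudo journalctl -u kubelet -n 400                     # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400                        # CRI-O logs
	sudo crictl ps -a                                     # container status
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig           # fails here because the apiserver is down
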
	I1213 20:23:38.322621   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Start
	I1213 20:23:38.322783   79820 main.go:141] libmachine: (newest-cni-535459) starting domain...
	I1213 20:23:38.322806   79820 main.go:141] libmachine: (newest-cni-535459) ensuring networks are active...
	I1213 20:23:38.323533   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network default is active
	I1213 20:23:38.323827   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network mk-newest-cni-535459 is active
	I1213 20:23:38.324140   79820 main.go:141] libmachine: (newest-cni-535459) getting domain XML...
	I1213 20:23:38.324747   79820 main.go:141] libmachine: (newest-cni-535459) creating domain...
	I1213 20:23:39.564073   79820 main.go:141] libmachine: (newest-cni-535459) waiting for IP...
	I1213 20:23:39.565035   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.565551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.565617   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.565533   79856 retry.go:31] will retry after 298.228952ms: waiting for domain to come up
	I1213 20:23:39.865149   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.865713   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.865742   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.865696   79856 retry.go:31] will retry after 251.6627ms: waiting for domain to come up
	I1213 20:23:40.119294   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.119854   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.119884   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.119834   79856 retry.go:31] will retry after 300.482126ms: waiting for domain to come up
	I1213 20:23:40.422534   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.423263   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.423290   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.423228   79856 retry.go:31] will retry after 512.35172ms: waiting for domain to come up
	I1213 20:23:40.936920   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.937508   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.937541   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.937492   79856 retry.go:31] will retry after 706.292926ms: waiting for domain to come up
	I1213 20:23:41.645625   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:41.646229   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:41.646365   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:41.646289   79856 retry.go:31] will retry after 925.304714ms: waiting for domain to come up
	I1213 20:23:42.572832   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:42.573505   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:42.573551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:42.573492   79856 retry.go:31] will retry after 784.905312ms: waiting for domain to come up
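
	While the restarted domain boots, the driver polls libvirt for a DHCP lease on the MAC address above, retrying with a growing, jittered delay (retry.go). A rough manual equivalent via virsh rather than the API the driver actually uses:

	until virsh -c qemu:///system domifaddr newest-cni-535459 | grep -q ipv4; do
	  sleep 1   # the driver uses randomized backoff instead of a fixed interval
	done
	virsh -c qemu:///system net-dhcp-leases mk-newest-cni-535459   # or read the lease table directly
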
	I1213 20:23:44.821257   77510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.710060568s)
	I1213 20:23:44.821343   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:44.851774   77510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:44.867597   77510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:44.882988   77510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:44.883012   77510 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:44.883061   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1213 20:23:44.897859   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:44.897930   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:44.930490   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1213 20:23:44.940775   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:44.940832   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:44.949814   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.958792   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:44.958864   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.967799   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1213 20:23:44.976918   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:44.976978   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:23:44.985827   77510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:45.032679   77510 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:45.032823   77510 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:45.154457   77510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:45.154613   77510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:45.154753   77510 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:45.168560   77510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:45.170392   77510 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:45.170484   77510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:45.170567   77510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:45.170671   77510 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:45.170773   77510 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:45.170895   77510 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:45.175078   77510 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:45.175301   77510 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:45.175631   77510 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:45.175826   77510 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:45.176621   77510 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:45.176938   77510 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:45.177096   77510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:45.425420   77510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:45.744337   77510 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.051697   77510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.134768   77510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.244436   77510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.245253   77510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.248609   77510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
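
	Because the certificates under /var/lib/minikube/certs survived the kubeadm reset, only the kubeconfig files and static Pod manifests have to be rewritten. The phases running here can also be invoked individually if a step needs to be repeated by hand, e.g.:

	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml          # reuses the existing certs
	sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml     # admin/kubelet/controller-manager/scheduler
	sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml         # etcd static Pod manifest
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml  # apiserver/controller-manager/scheduler manifests
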
	I1213 20:23:46.425197   77223 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:46.425300   77223 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:46.425412   77223 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:46.425543   77223 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:46.425669   77223 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:46.425751   77223 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:46.427622   77223 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:46.427725   77223 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:46.427829   77223 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:46.427918   77223 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:46.428011   77223 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:46.428119   77223 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:46.428197   77223 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:46.428286   77223 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:46.428363   77223 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:46.428447   77223 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:46.428558   77223 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:46.428626   77223 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:46.428704   77223 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:46.428791   77223 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:46.428896   77223 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.428988   77223 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.429081   77223 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.429176   77223 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.429297   77223 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.429377   77223 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:23:46.430801   77223 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.430919   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.431003   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.431082   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.431200   77223 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.431334   77223 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.431408   77223 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.431609   77223 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.431761   77223 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.431850   77223 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.304495ms
	I1213 20:23:46.432010   77223 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 20:23:46.432103   77223 kubeadm.go:310] [api-check] The API server is healthy after 5.002258285s
	I1213 20:23:46.432266   77223 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:46.432423   77223 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:46.432498   77223 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:46.432678   77223 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-475934 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:46.432749   77223 kubeadm.go:310] [bootstrap-token] Using token: ztynho.1kbaokhemrbxet6k
	I1213 20:23:46.434022   77223 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:46.434143   77223 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:46.434228   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:46.434361   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:46.434498   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:46.434622   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:46.434723   77223 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:46.434870   77223 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:46.434940   77223 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:46.435004   77223 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:46.435013   77223 kubeadm.go:310] 
	I1213 20:23:46.435096   77223 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:46.435109   77223 kubeadm.go:310] 
	I1213 20:23:46.435171   77223 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:46.435177   77223 kubeadm.go:310] 
	I1213 20:23:46.435197   77223 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:46.435248   77223 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:46.435294   77223 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:46.435300   77223 kubeadm.go:310] 
	I1213 20:23:46.435352   77223 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:46.435363   77223 kubeadm.go:310] 
	I1213 20:23:46.435402   77223 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:46.435408   77223 kubeadm.go:310] 
	I1213 20:23:46.435455   77223 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:46.435519   77223 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:46.435617   77223 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:46.435639   77223 kubeadm.go:310] 
	I1213 20:23:46.435750   77223 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:46.435854   77223 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:46.435869   77223 kubeadm.go:310] 
	I1213 20:23:46.435980   77223 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436148   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:46.436179   77223 kubeadm.go:310] 	--control-plane 
	I1213 20:23:46.436189   77223 kubeadm.go:310] 
	I1213 20:23:46.436310   77223 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:46.436321   77223 kubeadm.go:310] 
	I1213 20:23:46.436460   77223 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436635   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 20:23:46.436652   77223 cni.go:84] Creating CNI manager for ""
	I1213 20:23:46.436659   77223 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:46.438047   77223 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
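
	The bridge CNI step that follows for this cluster (further down, interleaved with the other profiles) just writes a small conflist to /etc/cni/net.d/1-k8s.conflist. The exact 496-byte payload is not reproduced in the log; a representative bridge config of the kind minikube generates looks like the sketch below, with the subnet matching the cluster's pod CIDR (contents illustrative, not taken from this run):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
	      "forceAddress": false, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
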
	I1213 20:23:42.109283   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:42.126005   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:42.126094   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:42.169463   78367 cri.go:89] found id: ""
	I1213 20:23:42.169494   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.169505   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:42.169512   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:42.169573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:42.214207   78367 cri.go:89] found id: ""
	I1213 20:23:42.214237   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.214248   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:42.214265   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:42.214327   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:42.255998   78367 cri.go:89] found id: ""
	I1213 20:23:42.256030   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.256041   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:42.256049   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:42.256104   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:42.295578   78367 cri.go:89] found id: ""
	I1213 20:23:42.295607   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.295618   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:42.295625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:42.295686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:42.336462   78367 cri.go:89] found id: ""
	I1213 20:23:42.336489   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.336501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:42.336509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:42.336568   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:42.377959   78367 cri.go:89] found id: ""
	I1213 20:23:42.377987   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.377998   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:42.378020   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:42.378083   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:42.421761   78367 cri.go:89] found id: ""
	I1213 20:23:42.421790   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.421799   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:42.421807   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:42.421866   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:42.456346   78367 cri.go:89] found id: ""
	I1213 20:23:42.456373   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.456387   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:42.456397   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:42.456411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:42.472200   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:42.472241   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:42.544913   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:42.544938   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:42.544954   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:42.646820   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:42.646869   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:42.685374   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:42.685411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.244342   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:45.257131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:45.257210   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:45.291023   78367 cri.go:89] found id: ""
	I1213 20:23:45.291064   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.291072   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:45.291085   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:45.291145   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:45.322469   78367 cri.go:89] found id: ""
	I1213 20:23:45.322499   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.322509   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:45.322516   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:45.322574   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:45.364647   78367 cri.go:89] found id: ""
	I1213 20:23:45.364679   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.364690   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:45.364696   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:45.364754   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:45.406124   78367 cri.go:89] found id: ""
	I1213 20:23:45.406151   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.406161   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:45.406169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:45.406229   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:45.449418   78367 cri.go:89] found id: ""
	I1213 20:23:45.449442   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.449450   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:45.449456   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:45.449513   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:45.491190   78367 cri.go:89] found id: ""
	I1213 20:23:45.491221   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.491231   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:45.491239   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:45.491312   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:45.537336   78367 cri.go:89] found id: ""
	I1213 20:23:45.537365   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.537375   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:45.537383   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:45.537442   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:45.574826   78367 cri.go:89] found id: ""
	I1213 20:23:45.574873   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.574884   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:45.574897   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:45.574911   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.656859   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:45.656900   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:45.671183   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:45.671211   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:45.748645   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:45.748670   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:45.748684   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:45.861549   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:45.861598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:43.360177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:43.360711   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:43.360749   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:43.360702   79856 retry.go:31] will retry after 910.256009ms: waiting for domain to come up
	I1213 20:23:44.272014   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:44.272526   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:44.272555   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:44.272488   79856 retry.go:31] will retry after 1.534434138s: waiting for domain to come up
	I1213 20:23:45.809190   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:45.809761   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:45.809786   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:45.809755   79856 retry.go:31] will retry after 2.307546799s: waiting for domain to come up
	I1213 20:23:48.120134   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:48.120663   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:48.120688   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:48.120620   79856 retry.go:31] will retry after 2.815296829s: waiting for domain to come up
	I1213 20:23:46.250264   77510 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.250387   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.250522   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.250655   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.274127   77510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.280501   77510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.280570   77510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.407152   77510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.407342   77510 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.909234   77510 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.289561ms
	I1213 20:23:46.909341   77510 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
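
	Both gates above are plain HTTP health probes: kubeadm first waits on the kubelet's local healthz endpoint, then on the API server. They can be reproduced from the node with the binaries and kubeconfig paths this run writes out:

	curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl get --raw=/livez \
	  --kubeconfig=/etc/kubernetes/admin.conf && echo apiserver ok
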
	I1213 20:23:46.439167   77223 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:46.452642   77223 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:46.478384   77223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:46.478435   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:46.478467   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-475934 minikube.k8s.io/updated_at=2024_12_13T20_23_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=no-preload-475934 minikube.k8s.io/primary=true
	I1213 20:23:46.497425   77223 ops.go:34] apiserver oom_adj: -16
	I1213 20:23:46.697773   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.198632   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.697921   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.198923   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.697941   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.198682   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.698572   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.198476   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.698077   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.793538   77223 kubeadm.go:1113] duration metric: took 4.315156477s to wait for elevateKubeSystemPrivileges
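
	elevateKubeSystemPrivileges is the retry loop above: it re-runs "kubectl get sa default" until the default ServiceAccount exists, at which point the minikube-rbac ClusterRoleBinding created a few lines earlier is usable. Expressed directly as a wait loop:

	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the retry interval here is illustrative; minikube uses its own ticker
	done
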
	I1213 20:23:50.793579   77223 kubeadm.go:394] duration metric: took 5m1.991513079s to StartCluster
	I1213 20:23:50.793600   77223 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.793686   77223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:50.795098   77223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.795375   77223 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:50.795446   77223 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:50.795546   77223 addons.go:69] Setting storage-provisioner=true in profile "no-preload-475934"
	I1213 20:23:50.795565   77223 addons.go:234] Setting addon storage-provisioner=true in "no-preload-475934"
	W1213 20:23:50.795574   77223 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:50.795605   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.795621   77223 config.go:182] Loaded profile config "no-preload-475934": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:50.795673   77223 addons.go:69] Setting default-storageclass=true in profile "no-preload-475934"
	I1213 20:23:50.795698   77223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-475934"
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796080   77223 addons.go:69] Setting dashboard=true in profile "no-preload-475934"
	I1213 20:23:50.796098   77223 addons.go:234] Setting addon dashboard=true in "no-preload-475934"
	I1213 20:23:50.796100   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1213 20:23:50.796105   77223 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:50.796129   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796167   77223 addons.go:69] Setting metrics-server=true in profile "no-preload-475934"
	I1213 20:23:50.796187   77223 addons.go:234] Setting addon metrics-server=true in "no-preload-475934"
	W1213 20:23:50.796195   77223 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:50.796223   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796371   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796476   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796502   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796625   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796665   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.802558   77223 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:50.804240   77223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:50.815506   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1213 20:23:50.815508   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1213 20:23:50.815849   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I1213 20:23:50.816023   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816131   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816355   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816463   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1213 20:23:50.816587   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816610   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816711   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816731   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816857   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816968   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817049   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817074   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817091   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817187   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.817334   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817353   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817814   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.817854   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818079   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818094   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818681   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818685   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818721   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818756   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.839237   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1213 20:23:50.855736   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.856284   77223 addons.go:234] Setting addon default-storageclass=true in "no-preload-475934"
	W1213 20:23:50.856308   77223 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:50.856341   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.856381   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.856404   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.856715   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.856733   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.856757   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.857004   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.859133   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.861074   77223 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:50.862375   77223 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:50.863494   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:50.863514   77223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:50.863535   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.874249   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.874355   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874381   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.874406   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874481   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.874755   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.875083   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.876889   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1213 20:23:50.876927   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I1213 20:23:50.877256   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1213 20:23:50.877531   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877577   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877899   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.878141   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878154   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878167   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878170   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878413   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878435   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878483   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878527   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878869   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.878879   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878893   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.879461   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.879507   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.880758   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.881011   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.882329   77223 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:50.882392   77223 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:50.883529   77223 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:50.883551   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:50.883911   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.884480   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:50.884501   77223 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:50.884518   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.888177   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888302   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888537   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888583   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888850   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888867   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.888870   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.889051   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.889070   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889186   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889244   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889291   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.889578   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889741   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.900416   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1213 20:23:50.904150   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.904681   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.904710   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.905101   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.905353   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.907076   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.907309   77223 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:50.907327   77223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:50.907346   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.913266   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913676   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.913698   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913923   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.914129   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.914296   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.914481   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:51.062632   77223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:51.080757   77223 node_ready.go:35] waiting up to 6m0s for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096457   77223 node_ready.go:49] node "no-preload-475934" has status "Ready":"True"
	I1213 20:23:51.096488   77223 node_ready.go:38] duration metric: took 15.695926ms for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096501   77223 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:51.101069   77223 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:51.153214   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:51.201828   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:51.201861   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:51.257276   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:51.286719   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:51.286743   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:23:48.414982   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:48.431396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:48.431482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:48.476067   78367 cri.go:89] found id: ""
	I1213 20:23:48.476112   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.476124   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:48.476131   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:48.476194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:48.517216   78367 cri.go:89] found id: ""
	I1213 20:23:48.517258   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.517269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:48.517277   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:48.517381   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:48.562993   78367 cri.go:89] found id: ""
	I1213 20:23:48.563092   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.563117   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:48.563135   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:48.563223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:48.604109   78367 cri.go:89] found id: ""
	I1213 20:23:48.604202   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.604224   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:48.604250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:48.604348   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:48.651185   78367 cri.go:89] found id: ""
	I1213 20:23:48.651219   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.651230   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:48.651238   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:48.651317   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:48.695266   78367 cri.go:89] found id: ""
	I1213 20:23:48.695305   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.695317   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:48.695325   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:48.695389   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:48.741459   78367 cri.go:89] found id: ""
	I1213 20:23:48.741495   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.741506   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:48.741513   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:48.741573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:48.785599   78367 cri.go:89] found id: ""
	I1213 20:23:48.785684   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.785701   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:48.785716   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:48.785744   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:48.845741   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:48.845777   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:48.862971   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:48.863013   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:48.934300   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:48.934328   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:48.934344   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:49.023110   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:49.023154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:51.562149   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:51.580078   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:51.580154   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:51.624644   78367 cri.go:89] found id: ""
	I1213 20:23:51.624677   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.624688   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:51.624696   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:51.624756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:51.910904   77510 kubeadm.go:310] [api-check] The API server is healthy after 5.001533218s
	I1213 20:23:51.928221   77510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:51.955180   77510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:51.988925   77510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:51.989201   77510 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-355668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:52.006352   77510 kubeadm.go:310] [bootstrap-token] Using token: 62dvzj.gok594hxuxcynd4x
	I1213 20:23:50.939565   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:50.940051   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:50.940081   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:50.940008   79856 retry.go:31] will retry after 2.96641877s: waiting for domain to come up
	I1213 20:23:51.311455   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:51.311485   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:51.369375   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:51.369403   77223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:51.424081   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:51.424111   77223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:51.425876   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.425896   77223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:51.467889   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.513308   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:51.513340   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:51.601978   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:51.602009   77223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:51.627122   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.627201   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.627580   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.629153   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629172   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629183   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.629191   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.629445   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629463   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629473   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.641253   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.641282   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.641576   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.641592   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.641593   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.656503   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:51.656529   77223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:51.736524   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:51.736554   77223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:51.766699   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:51.766786   77223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:51.801572   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:51.801601   77223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:51.819179   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:52.110163   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110190   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110480   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.110500   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.110507   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110514   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110508   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113643   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113667   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.113674   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551336   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08338913s)
	I1213 20:23:52.551397   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551410   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551700   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.551721   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551731   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551739   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551951   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.552000   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.552008   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.552025   77223 addons.go:475] Verifying addon metrics-server=true in "no-preload-475934"
	I1213 20:23:53.145015   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:53.262929   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.44371085s)
	I1213 20:23:53.262987   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263007   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263335   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263355   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.263365   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263373   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263380   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263640   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263680   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263688   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.265176   77223 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-475934 addons enable metrics-server
	
	I1213 20:23:53.266358   77223 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1213 20:23:52.007746   77510 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:52.007914   77510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:52.022398   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:52.033846   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:52.038811   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:52.052112   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:52.068899   77510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:52.319919   77510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:52.804645   77510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:53.320002   77510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:53.321529   77510 kubeadm.go:310] 
	I1213 20:23:53.321648   77510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:53.321684   77510 kubeadm.go:310] 
	I1213 20:23:53.321797   77510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:53.321809   77510 kubeadm.go:310] 
	I1213 20:23:53.321843   77510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:53.321931   77510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:53.322014   77510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:53.322039   77510 kubeadm.go:310] 
	I1213 20:23:53.322140   77510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:53.322154   77510 kubeadm.go:310] 
	I1213 20:23:53.322237   77510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:53.322253   77510 kubeadm.go:310] 
	I1213 20:23:53.322327   77510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:53.322439   77510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:53.322505   77510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:53.322511   77510 kubeadm.go:310] 
	I1213 20:23:53.322642   77510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:53.322757   77510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:53.322771   77510 kubeadm.go:310] 
	I1213 20:23:53.322937   77510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323079   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:53.323132   77510 kubeadm.go:310] 	--control-plane 
	I1213 20:23:53.323149   77510 kubeadm.go:310] 
	I1213 20:23:53.323269   77510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:53.323280   77510 kubeadm.go:310] 
	I1213 20:23:53.323407   77510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323556   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 20:23:53.324551   77510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:23:53.324579   77510 cni.go:84] Creating CNI manager for ""
	I1213 20:23:53.324591   77510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:53.326071   77510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:23:53.327260   77510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:53.338245   77510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:53.359781   77510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:53.359954   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.360067   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-355668 minikube.k8s.io/updated_at=2024_12_13T20_23_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=default-k8s-diff-port-355668 minikube.k8s.io/primary=true
	I1213 20:23:53.378620   77510 ops.go:34] apiserver oom_adj: -16
	I1213 20:23:53.595107   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.095889   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.596033   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.267500   77223 addons.go:510] duration metric: took 2.472063966s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1213 20:23:55.608441   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:51.673392   78367 cri.go:89] found id: ""
	I1213 20:23:51.673421   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.673432   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:51.673440   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:51.673501   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:51.721445   78367 cri.go:89] found id: ""
	I1213 20:23:51.721472   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.721480   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:51.721488   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:51.721544   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:51.755079   78367 cri.go:89] found id: ""
	I1213 20:23:51.755112   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.755123   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:51.755131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:51.755194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:51.796420   78367 cri.go:89] found id: ""
	I1213 20:23:51.796457   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.796470   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:51.796478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:51.796542   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:51.830054   78367 cri.go:89] found id: ""
	I1213 20:23:51.830080   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.830090   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:51.830098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:51.830153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:51.867546   78367 cri.go:89] found id: ""
	I1213 20:23:51.867574   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.867584   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:51.867592   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:51.867653   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:51.911804   78367 cri.go:89] found id: ""
	I1213 20:23:51.911830   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.911841   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:51.911853   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:51.911867   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:51.981311   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:51.981340   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:51.997948   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:51.997995   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:52.078493   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:52.078526   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:52.078541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:52.181165   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:52.181213   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:54.728341   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:54.742062   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:54.742122   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:54.779920   78367 cri.go:89] found id: ""
	I1213 20:23:54.779947   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.779958   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:54.779966   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:54.780021   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:54.813600   78367 cri.go:89] found id: ""
	I1213 20:23:54.813631   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.813641   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:54.813649   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:54.813711   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:54.846731   78367 cri.go:89] found id: ""
	I1213 20:23:54.846761   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.846771   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:54.846778   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:54.846837   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:54.878598   78367 cri.go:89] found id: ""
	I1213 20:23:54.878628   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.878638   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:54.878646   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:54.878706   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:54.914259   78367 cri.go:89] found id: ""
	I1213 20:23:54.914293   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.914304   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:54.914318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:54.914383   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:54.947232   78367 cri.go:89] found id: ""
	I1213 20:23:54.947264   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.947275   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:54.947283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:54.947350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:54.992079   78367 cri.go:89] found id: ""
	I1213 20:23:54.992108   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.992118   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:54.992125   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:54.992184   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:55.035067   78367 cri.go:89] found id: ""
	I1213 20:23:55.035093   78367 logs.go:282] 0 containers: []
	W1213 20:23:55.035100   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:55.035109   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:55.035122   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:55.108198   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:55.108224   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:55.108238   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:55.197303   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:55.197333   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:55.248131   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:55.248154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:55.301605   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:55.301635   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:53.907724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:53.908424   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:53.908470   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:53.908391   79856 retry.go:31] will retry after 4.35778362s: waiting for domain to come up
	I1213 20:23:55.095857   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:55.595908   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.095409   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.595238   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.095945   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.595757   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.095963   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.198049   77510 kubeadm.go:1113] duration metric: took 4.838144553s to wait for elevateKubeSystemPrivileges
	I1213 20:23:58.198082   77510 kubeadm.go:394] duration metric: took 5m1.770847274s to StartCluster
	I1213 20:23:58.198102   77510 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.198176   77510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:58.199549   77510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.199800   77510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:58.199963   77510 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:58.200086   77510 config.go:182] Loaded profile config "default-k8s-diff-port-355668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:58.200131   77510 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200150   77510 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200166   77510 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:58.200189   77510 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200199   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200211   77510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-355668"
	I1213 20:23:58.200610   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200626   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200639   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200656   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200712   77510 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200712   77510 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200725   77510 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200732   77510 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:58.200733   77510 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200742   77510 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:58.200754   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200771   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.205916   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205937   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205960   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.205976   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.206755   77510 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:58.208075   77510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:58.223074   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1213 20:23:58.223694   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.224155   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.224170   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.224674   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.224863   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.226583   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I1213 20:23:58.227150   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.227693   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.227712   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.228163   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.228437   77510 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.228457   77510 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:58.228483   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.228838   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228847   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228871   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.228882   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.238833   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I1213 20:23:58.245605   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I1213 20:23:58.246100   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.246630   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.246648   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.247050   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.247623   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.247662   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.249751   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I1213 20:23:58.250222   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.250772   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.250789   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.254939   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1213 20:23:58.254977   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.254944   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.255395   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.255455   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.255928   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.255944   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.256275   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.256811   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.256843   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.258976   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.259498   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.259515   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.260075   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.260720   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.260752   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.261030   77510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:58.262210   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:58.262229   77510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:58.262248   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.265414   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266021   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.266045   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266278   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.266441   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.266627   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.266776   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.268367   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1213 20:23:58.269174   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.270087   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.270108   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.270905   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.271343   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.278504   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I1213 20:23:58.279047   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.279669   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.279685   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.280236   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.280583   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.281949   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1213 20:23:58.282310   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.283003   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.283020   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.283408   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.286964   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.286998   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.287032   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.287233   77510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.287250   77510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:58.287276   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.288987   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.289809   77510 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:58.290685   77510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:58.292753   77510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.292774   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:58.292792   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.292849   77510 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:56.611155   77223 pod_ready.go:93] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:56.611190   77223 pod_ready.go:82] duration metric: took 5.510087654s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:56.611203   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116912   77223 pod_ready.go:93] pod "kube-apiserver-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.116945   77223 pod_ready.go:82] duration metric: took 505.733979ms for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116958   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121384   77223 pod_ready.go:93] pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.121411   77223 pod_ready.go:82] duration metric: took 4.445498ms for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121425   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.129454   77223 pod_ready.go:103] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:59.662780   77223 pod_ready.go:93] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:59.662813   77223 pod_ready.go:82] duration metric: took 2.541378671s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.662828   77223 pod_ready.go:39] duration metric: took 8.566311765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:59.662869   77223 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:23:59.662936   77223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:59.685691   77223 api_server.go:72] duration metric: took 8.890275631s to wait for apiserver process to appear ...
	I1213 20:23:59.685722   77223 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:23:59.685743   77223 api_server.go:253] Checking apiserver healthz at https://192.168.61.128:8443/healthz ...
	I1213 20:23:59.692539   77223 api_server.go:279] https://192.168.61.128:8443/healthz returned 200:
	ok
	I1213 20:23:59.694289   77223 api_server.go:141] control plane version: v1.31.2
	I1213 20:23:59.694317   77223 api_server.go:131] duration metric: took 8.58708ms to wait for apiserver health ...
	I1213 20:23:59.694327   77223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:23:59.703648   77223 system_pods.go:59] 9 kube-system pods found
	I1213 20:23:59.703682   77223 system_pods.go:61] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.703691   77223 system_pods.go:61] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.703697   77223 system_pods.go:61] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.703703   77223 system_pods.go:61] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.703711   77223 system_pods.go:61] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.703716   77223 system_pods.go:61] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.703721   77223 system_pods.go:61] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.703732   77223 system_pods.go:61] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.703742   77223 system_pods.go:61] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.703752   77223 system_pods.go:74] duration metric: took 9.418447ms to wait for pod list to return data ...
	I1213 20:23:59.703761   77223 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:23:59.713584   77223 default_sa.go:45] found service account: "default"
	I1213 20:23:59.713610   77223 default_sa.go:55] duration metric: took 9.841478ms for default service account to be created ...
	I1213 20:23:59.713621   77223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:23:59.720207   77223 system_pods.go:86] 9 kube-system pods found
	I1213 20:23:59.720230   77223 system_pods.go:89] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.720236   77223 system_pods.go:89] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.720240   77223 system_pods.go:89] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.720244   77223 system_pods.go:89] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.720247   77223 system_pods.go:89] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.720251   77223 system_pods.go:89] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.720255   77223 system_pods.go:89] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.720268   77223 system_pods.go:89] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.720272   77223 system_pods.go:89] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.720279   77223 system_pods.go:126] duration metric: took 6.653114ms to wait for k8s-apps to be running ...
	I1213 20:23:59.720288   77223 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:23:59.720325   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:59.743000   77223 system_svc.go:56] duration metric: took 22.70094ms WaitForService to wait for kubelet
	I1213 20:23:59.743035   77223 kubeadm.go:582] duration metric: took 8.947624109s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:23:59.743057   77223 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:23:59.747281   77223 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:23:59.747321   77223 node_conditions.go:123] node cpu capacity is 2
	I1213 20:23:59.747337   77223 node_conditions.go:105] duration metric: took 4.273745ms to run NodePressure ...
	I1213 20:23:59.747353   77223 start.go:241] waiting for startup goroutines ...
	I1213 20:23:59.747363   77223 start.go:246] waiting for cluster config update ...
	I1213 20:23:59.747380   77223 start.go:255] writing updated cluster config ...
	I1213 20:23:59.747732   77223 ssh_runner.go:195] Run: rm -f paused
	I1213 20:23:59.820239   77223 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:23:59.821954   77223 out.go:177] * Done! kubectl is now configured to use "no-preload-475934" cluster and "default" namespace by default
	I1213 20:23:58.293751   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294127   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:58.294142   77510 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:58.294178   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.294280   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.294376   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294629   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.294779   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.294932   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.295104   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.296706   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297082   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.297117   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297252   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.297422   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.297574   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.297699   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.298144   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298502   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.298608   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298673   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.298828   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.299124   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.299253   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.437780   77510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:58.458240   77510 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495039   77510 node_ready.go:49] node "default-k8s-diff-port-355668" has status "Ready":"True"
	I1213 20:23:58.495124   77510 node_ready.go:38] duration metric: took 36.851728ms for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495141   77510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:58.506404   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:58.548351   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:58.548377   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:23:58.570739   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:58.570762   77510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:58.591010   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.598380   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.598406   77510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:58.612228   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:58.612255   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:58.616620   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.643759   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:58.643785   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:58.657745   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.696453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:58.696548   77510 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:58.760682   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:58.760710   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:58.851490   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:58.851514   77510 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:58.930302   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:58.930330   77510 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:58.991218   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:58.991261   77510 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:59.066139   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:59.066169   77510 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:59.102453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.102479   77510 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:59.182801   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.970886   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.379839482s)
	I1213 20:23:59.970942   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.970957   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971058   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354409285s)
	I1213 20:23:59.971081   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971091   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971200   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313427588s)
	I1213 20:23:59.971217   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971227   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971296   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971333   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971340   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971348   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971355   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971564   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971577   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971587   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971594   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971800   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971830   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971836   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971848   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971861   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971860   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971873   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971883   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.974115   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974153   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974161   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.974168   77510 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-355668"
	I1213 20:23:59.974222   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974245   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974255   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.001667   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:00.001698   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:00.002135   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:00.002164   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.002136   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:24:00.532171   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.475325   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.292470675s)
	I1213 20:24:01.475377   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475399   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475719   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475733   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.475742   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475750   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475977   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475990   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.478505   77510 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-355668 addons enable metrics-server
	
	I1213 20:24:01.479872   77510 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1213 20:23:58.270264   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.270365   79820 main.go:141] libmachine: (newest-cni-535459) found domain IP: 192.168.50.11
	I1213 20:23:58.270394   79820 main.go:141] libmachine: (newest-cni-535459) reserving static IP address...
	I1213 20:23:58.270420   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has current primary IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.271183   79820 main.go:141] libmachine: (newest-cni-535459) reserved static IP address 192.168.50.11 for domain newest-cni-535459
	I1213 20:23:58.271227   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.271247   79820 main.go:141] libmachine: (newest-cni-535459) waiting for SSH...
	I1213 20:23:58.271278   79820 main.go:141] libmachine: (newest-cni-535459) DBG | skip adding static IP to network mk-newest-cni-535459 - found existing host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"}
	I1213 20:23:58.271286   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Getting to WaitForSSH function...
	I1213 20:23:58.277440   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283137   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.283166   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283641   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH client type: external
	I1213 20:23:58.283664   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa (-rw-------)
	I1213 20:23:58.283702   79820 main.go:141] libmachine: (newest-cni-535459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:23:58.283712   79820 main.go:141] libmachine: (newest-cni-535459) DBG | About to run SSH command:
	I1213 20:23:58.283724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | exit 0
	I1213 20:23:58.431895   79820 main.go:141] libmachine: (newest-cni-535459) DBG | SSH cmd err, output: <nil>: 
	I1213 20:23:58.432276   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetConfigRaw
	I1213 20:23:58.433028   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.436521   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.436848   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.436875   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.437192   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:58.437455   79820 machine.go:93] provisionDockerMachine start ...
	I1213 20:23:58.437480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:58.437689   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.440580   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441089   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.441132   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441277   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.441491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441620   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441769   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.441918   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.442164   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.442183   79820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 20:23:58.559163   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 20:23:58.559200   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559468   79820 buildroot.go:166] provisioning hostname "newest-cni-535459"
	I1213 20:23:58.559498   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559678   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.562818   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563374   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.563402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563582   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.563766   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.563919   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.564082   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.564268   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.564508   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.564530   79820 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-535459 && echo "newest-cni-535459" | sudo tee /etc/hostname
	I1213 20:23:58.696712   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-535459
	
	I1213 20:23:58.696798   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.700359   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.700838   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.700864   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.701015   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.701205   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701411   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701579   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.701764   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.702008   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.702036   79820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-535459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-535459/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-535459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:23:58.827902   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:23:58.827937   79820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:23:58.827979   79820 buildroot.go:174] setting up certificates
	I1213 20:23:58.827999   79820 provision.go:84] configureAuth start
	I1213 20:23:58.828016   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.828306   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.831180   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831550   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.831588   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831736   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.833951   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834312   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.834355   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834505   79820 provision.go:143] copyHostCerts
	I1213 20:23:58.834581   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:23:58.834598   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:23:58.834689   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:23:58.834879   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:23:58.834898   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:23:58.834948   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:23:58.835048   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:23:58.835067   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:23:58.835107   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:23:58.835195   79820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-535459 san=[127.0.0.1 192.168.50.11 localhost minikube newest-cni-535459]
	I1213 20:23:59.091370   79820 provision.go:177] copyRemoteCerts
	I1213 20:23:59.091432   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:23:59.091482   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.094717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095146   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.095177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095370   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.095547   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.095707   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.095832   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.177442   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:23:59.202054   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 20:23:59.228527   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 20:23:59.254148   79820 provision.go:87] duration metric: took 426.134893ms to configureAuth
	I1213 20:23:59.254187   79820 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:23:59.254402   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:59.254467   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.257684   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258113   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.258139   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258369   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.258575   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258743   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258913   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.259101   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.259355   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.259378   79820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:23:59.495940   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:23:59.495974   79820 machine.go:96] duration metric: took 1.058500785s to provisionDockerMachine
	I1213 20:23:59.495990   79820 start.go:293] postStartSetup for "newest-cni-535459" (driver="kvm2")
	I1213 20:23:59.496006   79820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:23:59.496029   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.496330   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:23:59.496359   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.499780   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500193   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.500234   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500450   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.500642   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.500813   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.500918   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.582993   79820 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:23:59.588260   79820 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:23:59.588297   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:23:59.588362   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:23:59.588431   79820 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:23:59.588562   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:23:59.601947   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:23:59.631405   79820 start.go:296] duration metric: took 135.398616ms for postStartSetup
	I1213 20:23:59.631454   79820 fix.go:56] duration metric: took 21.330020412s for fixHost
	I1213 20:23:59.631480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.634516   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.634952   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.635000   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.635198   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.635387   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635543   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635691   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.635840   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.636070   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.636084   79820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:23:59.749289   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734121439.718006490
	
	I1213 20:23:59.749313   79820 fix.go:216] guest clock: 1734121439.718006490
	I1213 20:23:59.749322   79820 fix.go:229] Guest: 2024-12-13 20:23:59.71800649 +0000 UTC Remote: 2024-12-13 20:23:59.631459768 +0000 UTC m=+21.470518452 (delta=86.546722ms)
	I1213 20:23:59.749347   79820 fix.go:200] guest clock delta is within tolerance: 86.546722ms
	I1213 20:23:59.749361   79820 start.go:83] releasing machines lock for "newest-cni-535459", held for 21.447944205s
	I1213 20:23:59.749385   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.749655   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:59.752968   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.753426   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753606   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754075   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754269   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754364   79820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:23:59.754400   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.754690   79820 ssh_runner.go:195] Run: cat /version.json
	I1213 20:23:59.754714   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.757878   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.767628   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.767685   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768022   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768079   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768303   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.768325   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768458   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768631   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768681   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768814   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.768849   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.769016   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.769027   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.888086   79820 ssh_runner.go:195] Run: systemctl --version
	I1213 20:23:59.899362   79820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:24:00.063446   79820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:24:00.072249   79820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:24:00.072336   79820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:24:00.093748   79820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 20:24:00.093780   79820 start.go:495] detecting cgroup driver to use...
	I1213 20:24:00.093849   79820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:24:00.117356   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:24:00.135377   79820 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:24:00.135437   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:24:00.155178   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:24:00.171890   79820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:24:00.321669   79820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:24:00.533366   79820 docker.go:233] disabling docker service ...
	I1213 20:24:00.533432   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:24:00.551511   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:24:00.569283   79820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:24:00.748948   79820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:24:00.924287   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:24:00.938559   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:24:00.958306   79820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 20:24:00.958394   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.968592   79820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:24:00.968667   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.979213   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.993825   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.004141   79820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:24:01.015195   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.025731   79820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.048789   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
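	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf using the cgroupfs cgroup manager, the registry.k8s.io/pause:3.10 pause image, conmon in the "pod" cgroup, and unprivileged ports starting at 0. A sketch for verifying the result on the node (expected values reconstructed from the commands in this log, not read back from the VM):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",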
	I1213 20:24:01.062542   79820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:24:01.074137   79820 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:24:01.074218   79820 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:24:01.091233   79820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
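	The status-255 sysctl probe above is expected on a freshly booted guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the next commands load the module and enable IPv4 forwarding. The same sequence as a standalone sketch:
	    sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of failing
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # required for routed pod traffic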
	I1213 20:24:01.103721   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:01.274965   79820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:24:01.400580   79820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:24:01.400700   79820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:24:01.406514   79820 start.go:563] Will wait 60s for crictl version
	I1213 20:24:01.406581   79820 ssh_runner.go:195] Run: which crictl
	I1213 20:24:01.411798   79820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:24:01.463581   79820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:24:01.463672   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.503505   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.545804   79820 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1213 20:24:01.547133   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:24:01.550717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:01.551198   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551399   79820 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 20:24:01.555655   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:01.574604   79820 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 20:23:57.815345   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:57.830459   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:57.830536   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:57.867421   78367 cri.go:89] found id: ""
	I1213 20:23:57.867450   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.867462   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:57.867470   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:57.867528   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:57.904972   78367 cri.go:89] found id: ""
	I1213 20:23:57.905010   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.905021   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:57.905029   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:57.905092   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:57.951889   78367 cri.go:89] found id: ""
	I1213 20:23:57.951916   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.951928   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:57.951936   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:57.952010   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:57.998664   78367 cri.go:89] found id: ""
	I1213 20:23:57.998697   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.998708   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:57.998715   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:57.998772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:58.047566   78367 cri.go:89] found id: ""
	I1213 20:23:58.047597   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.047608   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:58.047625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:58.047686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:58.082590   78367 cri.go:89] found id: ""
	I1213 20:23:58.082619   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.082629   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:58.082637   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:58.082694   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:58.125035   78367 cri.go:89] found id: ""
	I1213 20:23:58.125071   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.125080   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:58.125087   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:58.125147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:58.168019   78367 cri.go:89] found id: ""
	I1213 20:23:58.168049   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.168060   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:58.168078   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:58.168092   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:58.268179   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:58.268212   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:58.303166   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:58.303192   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:58.393172   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:58.393206   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:58.393220   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:58.489198   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:58.489230   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:01.033661   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:01.047673   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:01.047747   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:01.089498   78367 cri.go:89] found id: ""
	I1213 20:24:01.089526   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.089536   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:01.089543   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:01.089605   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:01.130215   78367 cri.go:89] found id: ""
	I1213 20:24:01.130245   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.130256   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:01.130264   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:01.130326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:01.177064   78367 cri.go:89] found id: ""
	I1213 20:24:01.177102   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.177119   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:01.177126   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:01.177187   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:01.231277   78367 cri.go:89] found id: ""
	I1213 20:24:01.231312   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.231324   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:01.231332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:01.231395   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:01.277419   78367 cri.go:89] found id: ""
	I1213 20:24:01.277446   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.277456   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:01.277463   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:01.277519   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:01.322970   78367 cri.go:89] found id: ""
	I1213 20:24:01.322996   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.323007   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:01.323017   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:01.323087   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:01.369554   78367 cri.go:89] found id: ""
	I1213 20:24:01.369585   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.369596   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:01.369603   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:01.369661   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:01.411927   78367 cri.go:89] found id: ""
	I1213 20:24:01.411957   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.411967   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:01.411987   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:01.412005   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:01.486061   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:01.486097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:01.500644   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:01.500673   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:01.578266   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:01.578283   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:01.578293   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:01.575794   79820 kubeadm.go:883] updating cluster {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:24:01.575963   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:24:01.576035   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:01.617299   79820 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1213 20:24:01.617414   79820 ssh_runner.go:195] Run: which lz4
	I1213 20:24:01.621480   79820 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:24:01.625517   79820 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:24:01.625563   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1213 20:24:03.034691   79820 crio.go:462] duration metric: took 1.413259837s to copy over tarball
	I1213 20:24:03.034768   79820 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:24:01.481491   77510 addons.go:510] duration metric: took 3.281543559s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:02.601672   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.687325   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:01.687362   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.239043   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:04.252218   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:04.252292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:04.294778   78367 cri.go:89] found id: ""
	I1213 20:24:04.294810   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.294820   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:04.294828   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:04.294910   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:04.339012   78367 cri.go:89] found id: ""
	I1213 20:24:04.339049   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.339061   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:04.339069   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:04.339134   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:04.391028   78367 cri.go:89] found id: ""
	I1213 20:24:04.391064   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.391076   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:04.391084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:04.391147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:04.436260   78367 cri.go:89] found id: ""
	I1213 20:24:04.436291   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.436308   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:04.436316   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:04.436372   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:04.485225   78367 cri.go:89] found id: ""
	I1213 20:24:04.485255   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.485274   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:04.485283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:04.485347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:04.527198   78367 cri.go:89] found id: ""
	I1213 20:24:04.527228   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.527239   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:04.527247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:04.527306   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:04.567885   78367 cri.go:89] found id: ""
	I1213 20:24:04.567915   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.567926   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:04.567934   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:04.567984   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:04.608495   78367 cri.go:89] found id: ""
	I1213 20:24:04.608535   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.608546   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:04.608557   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:04.608571   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:04.691701   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:04.691735   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.739203   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:04.739236   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:04.815994   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:04.816050   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:04.851237   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:04.851277   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:04.994736   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:05.429979   79820 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.395156779s)
	I1213 20:24:05.430008   79820 crio.go:469] duration metric: took 2.395289211s to extract the tarball
	I1213 20:24:05.430017   79820 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 20:24:05.486315   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:05.546704   79820 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 20:24:05.546729   79820 cache_images.go:84] Images are preloaded, skipping loading
	I1213 20:24:05.546737   79820 kubeadm.go:934] updating node { 192.168.50.11 8443 v1.31.2 crio true true} ...
	I1213 20:24:05.546882   79820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-535459 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
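	The [Unit]/[Service] fragment above is the systemd drop-in minikube generates for the kubelet (written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line is the usual systemd idiom for clearing the packaged command before overriding it. To inspect the merged unit on the node, a sketch:
	    sudo systemctl cat kubelet    # kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload  # needed after changing the drop-in by hand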
	I1213 20:24:05.546997   79820 ssh_runner.go:195] Run: crio config
	I1213 20:24:05.617708   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:05.617734   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:05.617757   79820 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1213 20:24:05.617784   79820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.11 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-535459 NodeName:newest-cni-535459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 20:24:05.617925   79820 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-535459"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:24:05.618013   79820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 20:24:05.631181   79820 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:24:05.631261   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:24:05.642971   79820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1213 20:24:05.662761   79820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:24:05.682676   79820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
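	The kubeadm.yaml.new just written contains the four documents dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). If you wanted to sanity-check such a file by hand, a hedged sketch using the kubeadm binary minikube ships for this Kubernetes version (--dry-run keeps it from touching the node):
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --dry-run \
	        --config /var/tmp/minikube/kubeadm.yaml.new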
	I1213 20:24:05.706170   79820 ssh_runner.go:195] Run: grep 192.168.50.11	control-plane.minikube.internal$ /etc/hosts
	I1213 20:24:05.710946   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:05.733291   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:05.878920   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:05.899390   79820 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459 for IP: 192.168.50.11
	I1213 20:24:05.899419   79820 certs.go:194] generating shared ca certs ...
	I1213 20:24:05.899438   79820 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:05.899615   79820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:24:05.899668   79820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:24:05.899681   79820 certs.go:256] generating profile certs ...
	I1213 20:24:05.899786   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/client.key
	I1213 20:24:05.899867   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key.6c5572a8
	I1213 20:24:05.899919   79820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key
	I1213 20:24:05.900072   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:24:05.900112   79820 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:24:05.900124   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:24:05.900156   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:24:05.900187   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:24:05.900215   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:24:05.900269   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:24:05.901141   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:24:05.939874   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:24:05.978129   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:24:06.014027   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:24:06.054231   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 20:24:06.082617   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 20:24:06.113846   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:24:06.160961   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:24:06.186616   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:24:06.210814   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:24:06.235875   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:24:06.268351   79820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:24:06.289062   79820 ssh_runner.go:195] Run: openssl version
	I1213 20:24:06.295624   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:24:06.309685   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314119   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314222   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.320247   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
	I1213 20:24:06.331949   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:24:06.343731   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348018   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348081   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.353554   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:24:06.366858   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:24:06.377728   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382326   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382401   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.390103   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
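	The ls/openssl/ln pattern repeated above is the standard OpenSSL trust-directory layout: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, and the hash comes from openssl x509 -hash. A sketch for the minikube CA, matching the b5213941.0 link created here:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"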
	I1213 20:24:06.404838   79820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:24:06.410770   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 20:24:06.422025   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 20:24:06.431833   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 20:24:06.438647   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 20:24:06.444814   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 20:24:06.452219   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
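	Each of the openssl x509 ... -checkend 86400 runs above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how the restart path decides whether a cert needs regenerating. The same check over all of the profile certs in one loop (paths taken from this log):
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	        || echo "$c expires within 24h"
	    done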
	I1213 20:24:06.458272   79820 kubeadm.go:392] StartCluster: {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:24:06.458424   79820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:24:06.458491   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.506732   79820 cri.go:89] found id: ""
	I1213 20:24:06.506810   79820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:24:06.518343   79820 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 20:24:06.518376   79820 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 20:24:06.518430   79820 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 20:24:06.531209   79820 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 20:24:06.532070   79820 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-535459" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:06.532572   79820 kubeconfig.go:62] /home/jenkins/minikube-integration/20090-12353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-535459" cluster setting kubeconfig missing "newest-cni-535459" context setting]
	I1213 20:24:06.533290   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:06.539651   79820 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 20:24:06.550828   79820 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.11
	I1213 20:24:06.550886   79820 kubeadm.go:1160] stopping kube-system containers ...
	I1213 20:24:06.550902   79820 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 20:24:06.550970   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.612618   79820 cri.go:89] found id: ""
	I1213 20:24:06.612750   79820 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 20:24:06.636007   79820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:24:06.648489   79820 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:24:06.648512   79820 kubeadm.go:157] found existing configuration files:
	
	I1213 20:24:06.648563   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:24:06.660079   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:24:06.660154   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:24:06.672333   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:24:06.683617   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:24:06.683683   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:24:06.695818   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.706996   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:24:06.707073   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.718672   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:24:06.729768   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:24:06.729838   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:24:06.742002   79820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:24:06.754184   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:07.010247   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.064932   79820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054652155s)
	I1213 20:24:08.064963   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:05.014076   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:06.021280   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.021310   77510 pod_ready.go:82] duration metric: took 7.514875372s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.021326   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035861   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.035888   77510 pod_ready.go:82] duration metric: took 14.555021ms for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035900   77510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979006   77510 pod_ready.go:93] pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.979035   77510 pod_ready.go:82] duration metric: took 943.126351ms for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979049   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989635   77510 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.989665   77510 pod_ready.go:82] duration metric: took 10.607567ms for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989677   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999141   77510 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.999235   77510 pod_ready.go:82] duration metric: took 9.54585ms for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999273   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012290   77510 pod_ready.go:93] pod "kube-proxy-vjsf7" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.012314   77510 pod_ready.go:82] duration metric: took 13.004089ms for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012327   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842063   77510 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.842088   77510 pod_ready.go:82] duration metric: took 829.753011ms for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842099   77510 pod_ready.go:39] duration metric: took 9.346942648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:24:07.842114   77510 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:07.842174   77510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.858079   77510 api_server.go:72] duration metric: took 9.658239691s to wait for apiserver process to appear ...
	I1213 20:24:07.858107   77510 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:07.858133   77510 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8444/healthz ...
	I1213 20:24:07.864534   77510 api_server.go:279] https://192.168.39.233:8444/healthz returned 200:
	ok
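	The healthz probe above can be reproduced outside the test harness; a sketch against the endpoint used in this run (-k skips TLS verification, since the apiserver presents a cert signed by the cluster-local minikubeCA):
	    curl -k https://192.168.39.233:8444/healthz   # expect: ok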
	I1213 20:24:07.865713   77510 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:07.865744   77510 api_server.go:131] duration metric: took 7.628649ms to wait for apiserver health ...
	I1213 20:24:07.865758   77510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:07.872447   77510 system_pods.go:59] 9 kube-system pods found
	I1213 20:24:07.872473   77510 system_pods.go:61] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.872478   77510 system_pods.go:61] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.872482   77510 system_pods.go:61] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.872486   77510 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.872490   77510 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.872492   77510 system_pods.go:61] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.872496   77510 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.872502   77510 system_pods.go:61] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.872507   77510 system_pods.go:61] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.872518   77510 system_pods.go:74] duration metric: took 6.753419ms to wait for pod list to return data ...
	I1213 20:24:07.872532   77510 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:07.875714   77510 default_sa.go:45] found service account: "default"
	I1213 20:24:07.875737   77510 default_sa.go:55] duration metric: took 3.19796ms for default service account to be created ...
	I1213 20:24:07.875748   77510 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:24:07.881451   77510 system_pods.go:86] 9 kube-system pods found
	I1213 20:24:07.881474   77510 system_pods.go:89] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.881480   77510 system_pods.go:89] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.881484   77510 system_pods.go:89] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.881489   77510 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.881493   77510 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.881496   77510 system_pods.go:89] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.881500   77510 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.881507   77510 system_pods.go:89] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.881512   77510 system_pods.go:89] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.881519   77510 system_pods.go:126] duration metric: took 5.765842ms to wait for k8s-apps to be running ...
	I1213 20:24:07.881529   77510 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:24:07.881576   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:24:07.896968   77510 system_svc.go:56] duration metric: took 15.429735ms WaitForService to wait for kubelet
	I1213 20:24:07.897000   77510 kubeadm.go:582] duration metric: took 9.69716545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:24:07.897023   77510 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:08.181918   77510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:08.181946   77510 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:08.181959   77510 node_conditions.go:105] duration metric: took 284.930197ms to run NodePressure ...
	I1213 20:24:08.181973   77510 start.go:241] waiting for startup goroutines ...
	I1213 20:24:08.181983   77510 start.go:246] waiting for cluster config update ...
	I1213 20:24:08.181997   77510 start.go:255] writing updated cluster config ...
	I1213 20:24:08.257251   77510 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:08.310968   77510 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:08.560633   77510 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-355668" cluster and "default" namespace by default
	I1213 20:24:07.495945   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.509565   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:07.509640   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:07.548332   78367 cri.go:89] found id: ""
	I1213 20:24:07.548357   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.548365   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:07.548371   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:07.548417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:07.585718   78367 cri.go:89] found id: ""
	I1213 20:24:07.585745   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.585752   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:07.585758   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:07.585816   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:07.620441   78367 cri.go:89] found id: ""
	I1213 20:24:07.620470   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.620478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:07.620485   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:07.620543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:07.654638   78367 cri.go:89] found id: ""
	I1213 20:24:07.654671   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.654682   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:07.654690   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:07.654752   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:07.690251   78367 cri.go:89] found id: ""
	I1213 20:24:07.690279   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.690289   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:07.690296   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:07.690362   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:07.733229   78367 cri.go:89] found id: ""
	I1213 20:24:07.733260   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.733268   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:07.733274   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:07.733325   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:07.767187   78367 cri.go:89] found id: ""
	I1213 20:24:07.767218   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.767229   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:07.767237   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:07.767309   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:07.803454   78367 cri.go:89] found id: ""
	I1213 20:24:07.803477   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.803485   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:07.803493   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:07.803504   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:07.884578   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:07.884602   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:07.884616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:07.966402   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:07.966448   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.010335   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:08.010368   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:08.064614   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:08.064647   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:10.580540   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:10.597959   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:10.598030   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:10.667638   78367 cri.go:89] found id: ""
	I1213 20:24:10.667665   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.667675   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:10.667683   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:10.667739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:10.728894   78367 cri.go:89] found id: ""
	I1213 20:24:10.728918   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.728929   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:10.728936   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:10.728992   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:10.771954   78367 cri.go:89] found id: ""
	I1213 20:24:10.771991   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.772001   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:10.772009   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:10.772067   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:10.818154   78367 cri.go:89] found id: ""
	I1213 20:24:10.818181   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.818188   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:10.818193   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:10.818240   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:10.858974   78367 cri.go:89] found id: ""
	I1213 20:24:10.859003   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.859014   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:10.859021   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:10.859086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:10.908481   78367 cri.go:89] found id: ""
	I1213 20:24:10.908511   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.908524   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:10.908532   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:10.908604   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:10.944951   78367 cri.go:89] found id: ""
	I1213 20:24:10.944979   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.944987   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:10.945001   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:10.945064   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:10.979563   78367 cri.go:89] found id: ""
	I1213 20:24:10.979588   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.979596   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:10.979604   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:10.979616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:11.052472   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:11.052507   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:11.068916   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:11.068947   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:11.146800   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:11.146826   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:11.146839   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:11.248307   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:11.248347   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.321808   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.374083   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.441322   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:08.441414   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:08.942600   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.441659   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.480026   79820 api_server.go:72] duration metric: took 1.038702713s to wait for apiserver process to appear ...
	I1213 20:24:09.480059   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:09.480084   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:09.480678   79820 api_server.go:269] stopped: https://192.168.50.11:8443/healthz: Get "https://192.168.50.11:8443/healthz": dial tcp 192.168.50.11:8443: connect: connection refused
	I1213 20:24:09.980257   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.178320   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.178365   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.178382   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.185253   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.185281   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.480680   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.491410   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.491444   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:12.981159   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.986141   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.986171   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:13.480205   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:13.485225   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:13.494430   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:13.494452   79820 api_server.go:131] duration metric: took 4.014386318s to wait for apiserver health ...
	I1213 20:24:13.494460   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:13.494465   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:13.496012   79820 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:24:13.497376   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:24:13.511144   79820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:24:13.533969   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:13.556295   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:13.556338   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:13.556349   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:13.556359   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:13.556368   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:13.556384   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 20:24:13.556397   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:13.556406   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:13.556420   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 20:24:13.556432   79820 system_pods.go:74] duration metric: took 22.427974ms to wait for pod list to return data ...
	I1213 20:24:13.556444   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:13.563220   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:13.563264   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:13.563277   79820 node_conditions.go:105] duration metric: took 6.825662ms to run NodePressure ...
	I1213 20:24:13.563301   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:13.855672   79820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:24:13.870068   79820 ops.go:34] apiserver oom_adj: -16
	I1213 20:24:13.870105   79820 kubeadm.go:597] duration metric: took 7.351714184s to restartPrimaryControlPlane
	I1213 20:24:13.870119   79820 kubeadm.go:394] duration metric: took 7.411858052s to StartCluster
	I1213 20:24:13.870140   79820 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.870220   79820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:13.871661   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.871898   79820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:24:13.871961   79820 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:24:13.872063   79820 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-535459"
	I1213 20:24:13.872081   79820 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-535459"
	W1213 20:24:13.872093   79820 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:24:13.872124   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872109   79820 addons.go:69] Setting default-storageclass=true in profile "newest-cni-535459"
	I1213 20:24:13.872135   79820 addons.go:69] Setting metrics-server=true in profile "newest-cni-535459"
	I1213 20:24:13.872156   79820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-535459"
	I1213 20:24:13.872143   79820 addons.go:69] Setting dashboard=true in profile "newest-cni-535459"
	I1213 20:24:13.872165   79820 addons.go:234] Setting addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:13.872174   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1213 20:24:13.872182   79820 addons.go:243] addon metrics-server should already be in state true
	I1213 20:24:13.872219   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872182   79820 addons.go:234] Setting addon dashboard=true in "newest-cni-535459"
	W1213 20:24:13.872286   79820 addons.go:243] addon dashboard should already be in state true
	I1213 20:24:13.872327   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872589   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872598   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872618   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872634   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872647   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872667   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872703   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872640   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.874676   79820 out.go:177] * Verifying Kubernetes components...
	I1213 20:24:13.875998   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:13.893363   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I1213 20:24:13.893468   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I1213 20:24:13.893952   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894024   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I1213 20:24:13.893961   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894530   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894709   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894722   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.894862   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894876   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895087   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.895103   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895161   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895204   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895380   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895776   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.895816   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.896278   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1213 20:24:13.896384   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.896414   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896800   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.897325   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.897345   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.897762   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.898269   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.898302   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.899617   79820 addons.go:234] Setting addon default-storageclass=true in "newest-cni-535459"
	W1213 20:24:13.899633   79820 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:24:13.899663   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.900022   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.900056   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.916023   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I1213 20:24:13.916600   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.916836   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1213 20:24:13.917124   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917139   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917211   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.917661   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917682   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917755   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.917969   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.918150   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.918406   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.920502   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.921252   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.922950   79820 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:24:13.922980   79820 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:24:13.924173   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I1213 20:24:13.924523   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:24:13.924543   79820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:24:13.924561   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.924812   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.925357   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.925375   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.925880   79820 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:24:13.926431   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.926644   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.927129   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:24:13.927146   79820 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:24:13.927165   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.929247   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.930886   79820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:24:13.794975   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:13.809490   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:13.809563   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:13.845247   78367 cri.go:89] found id: ""
	I1213 20:24:13.845312   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.845326   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:13.845337   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:13.845404   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:13.891111   78367 cri.go:89] found id: ""
	I1213 20:24:13.891155   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.891167   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:13.891174   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:13.891225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:13.944404   78367 cri.go:89] found id: ""
	I1213 20:24:13.944423   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.944431   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:13.944438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:13.944479   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:13.982745   78367 cri.go:89] found id: ""
	I1213 20:24:13.982766   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.982773   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:13.982779   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:13.982823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:14.018505   78367 cri.go:89] found id: ""
	I1213 20:24:14.018537   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.018547   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:14.018555   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:14.018622   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:14.053196   78367 cri.go:89] found id: ""
	I1213 20:24:14.053222   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.053233   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:14.053241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:14.053305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:14.085486   78367 cri.go:89] found id: ""
	I1213 20:24:14.085516   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.085526   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:14.085534   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:14.085600   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:14.123930   78367 cri.go:89] found id: ""
	I1213 20:24:14.123958   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.123968   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:14.123979   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:14.123993   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:14.184665   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:14.184705   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:14.207707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:14.207742   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:14.317989   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:14.318017   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:14.318037   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:14.440228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:14.440275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:13.932098   79820 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:13.932112   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:24:13.932127   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.934949   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934951   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.934975   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934995   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935008   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935027   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935077   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935093   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935143   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935181   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935319   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935471   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935503   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935535   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935695   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935709   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.935690   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.936047   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.940133   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1213 20:24:13.940516   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.940964   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.940980   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.941375   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.941957   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.941999   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.965055   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I1213 20:24:13.966122   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.966772   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.966800   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.967221   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.967423   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.969213   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.969387   79820 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:13.969404   79820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:24:13.969424   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.971994   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972410   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.972431   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972569   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.972706   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.972834   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.972937   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:14.127383   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:14.156652   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:14.156824   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:14.175603   79820 api_server.go:72] duration metric: took 303.674582ms to wait for apiserver process to appear ...
	I1213 20:24:14.175692   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:14.175713   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:14.180066   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:14.181204   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:14.181224   79820 api_server.go:131] duration metric: took 5.524316ms to wait for apiserver health ...
	I1213 20:24:14.181240   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:14.186870   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:14.186902   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:14.186913   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:14.186926   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:14.186935   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:14.186942   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running
	I1213 20:24:14.186950   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:14.186958   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:14.186963   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running
	I1213 20:24:14.186970   79820 system_pods.go:74] duration metric: took 5.722864ms to wait for pod list to return data ...
	I1213 20:24:14.186978   79820 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:14.191022   79820 default_sa.go:45] found service account: "default"
	I1213 20:24:14.191047   79820 default_sa.go:55] duration metric: took 4.057067ms for default service account to be created ...
	I1213 20:24:14.191062   79820 kubeadm.go:582] duration metric: took 319.136167ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:24:14.191078   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:14.203724   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:14.203754   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:14.203765   79820 node_conditions.go:105] duration metric: took 12.682303ms to run NodePressure ...
	I1213 20:24:14.203779   79820 start.go:241] waiting for startup goroutines ...
	I1213 20:24:14.265979   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:14.322830   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:24:14.322892   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:24:14.353048   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:14.355217   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:24:14.355245   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:24:14.409641   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:24:14.409670   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:24:14.425869   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:24:14.425901   79820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:24:14.489915   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:24:14.490017   79820 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:24:14.521997   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.522024   79820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:24:14.564655   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:24:14.564686   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:24:14.614041   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.641054   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:24:14.641084   79820 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:24:14.710567   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:24:14.710601   79820 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:24:14.745018   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:24:14.745055   79820 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:24:14.779553   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:24:14.779583   79820 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:24:14.893256   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:14.893286   79820 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:24:14.933845   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:16.576729   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.310647345s)
	I1213 20:24:16.576794   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576808   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576827   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.223742976s)
	I1213 20:24:16.576868   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576885   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576966   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962891887s)
	I1213 20:24:16.576995   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.577005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578358   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578370   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578382   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578394   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578394   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578402   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578413   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578421   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578424   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578430   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578432   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578442   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578457   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578404   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578486   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578697   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578728   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578743   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578825   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578853   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578862   79820 addons.go:475] Verifying addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:16.578921   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578931   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578944   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.624470   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.624501   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.624775   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.624793   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847028   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.913138549s)
	I1213 20:24:16.847092   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847111   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847446   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847467   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847482   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847737   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847764   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.849290   79820 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-535459 addons enable metrics-server
	
	I1213 20:24:16.850380   79820 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1213 20:24:16.851370   79820 addons.go:510] duration metric: took 2.979414999s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:16.851411   79820 start.go:246] waiting for cluster config update ...
	I1213 20:24:16.851425   79820 start.go:255] writing updated cluster config ...
	I1213 20:24:16.851676   79820 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:16.919885   79820 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:16.921326   79820 out.go:177] * Done! kubectl is now configured to use "newest-cni-535459" cluster and "default" namespace by default
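
The "Done!" line above closes out the newest-cni-535459 start-up (process 79820); its final check reports kubectl 1.32.0 against a 1.31.2 cluster, i.e. a minor-version skew of 1. As a rough illustration only (not minikube's actual start.go code), the skew amounts to parsing the two version strings and comparing their minor components:

// Minimal sketch (illustrative, not minikube's implementation) of the
// version-skew note printed above: parse the kubectl and cluster versions
// and report the difference of their minor components.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf returns the minor component of a "major.minor.patch" version string.
func minorOf(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlVersion := "1.32.0" // client version from the log line above
	clusterVersion := "1.31.2" // server version from the log line above

	km, err := minorOf(kubectlVersion)
	if err != nil {
		panic(err)
	}
	cm, err := minorOf(clusterVersion)
	if err != nil {
		panic(err)
	}

	skew := km - cm
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}

kubectl is supported within one minor version of the API server, so a skew of 1 only earns the informational note seen in the log rather than a warning.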
	I1213 20:24:16.992002   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:17.010798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:17.010887   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:17.054515   78367 cri.go:89] found id: ""
	I1213 20:24:17.054539   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.054548   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:17.054557   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:17.054608   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:17.106222   78367 cri.go:89] found id: ""
	I1213 20:24:17.106258   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.106269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:17.106276   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:17.106328   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:17.145680   78367 cri.go:89] found id: ""
	I1213 20:24:17.145706   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.145713   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:17.145719   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:17.145772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:17.183345   78367 cri.go:89] found id: ""
	I1213 20:24:17.183372   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.183383   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:17.183391   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:17.183440   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:17.218181   78367 cri.go:89] found id: ""
	I1213 20:24:17.218214   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.218226   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:17.218233   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:17.218308   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:17.260697   78367 cri.go:89] found id: ""
	I1213 20:24:17.260736   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.260747   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:17.260756   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:17.260815   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:17.296356   78367 cri.go:89] found id: ""
	I1213 20:24:17.296383   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.296394   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:17.296402   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:17.296452   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:17.332909   78367 cri.go:89] found id: ""
	I1213 20:24:17.332936   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.332946   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:17.332956   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:17.332979   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:17.400328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:17.400361   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:17.419802   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:17.419836   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:17.508687   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:17.508709   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:17.508724   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:17.594401   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:17.594433   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
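
From this point on the second process (PID prefix 78367) repeats the same diagnostic pass every few seconds: for each control-plane component it asks CRI-O for matching containers, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of that container check, assuming crictl is installed and passwordless sudo is available (an illustration of the logged commands, not the cri.go implementation they come from):

// Illustrative sketch of the per-component check the log repeats above:
// list all containers whose name matches and note when none are found.
// Assumes `crictl` is on PATH and `sudo` does not prompt.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent to the logged command:
		//   sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

An empty ID list is exactly the `found id: ""` / `0 containers: []` pair the log prints for every component in each pass.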
	I1213 20:24:20.132881   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:20.151309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:20.151382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:20.185818   78367 cri.go:89] found id: ""
	I1213 20:24:20.185845   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.185854   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:20.185862   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:20.185913   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:20.227855   78367 cri.go:89] found id: ""
	I1213 20:24:20.227885   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.227895   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:20.227902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:20.227957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:20.265126   78367 cri.go:89] found id: ""
	I1213 20:24:20.265149   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.265158   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:20.265165   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:20.265215   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:20.303082   78367 cri.go:89] found id: ""
	I1213 20:24:20.303100   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.303106   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:20.303112   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:20.303148   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:20.334523   78367 cri.go:89] found id: ""
	I1213 20:24:20.334554   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.334565   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:20.334573   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:20.334634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:20.367872   78367 cri.go:89] found id: ""
	I1213 20:24:20.367904   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.367915   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:20.367922   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:20.367972   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:20.401025   78367 cri.go:89] found id: ""
	I1213 20:24:20.401053   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.401063   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:20.401071   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:20.401118   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:20.437198   78367 cri.go:89] found id: ""
	I1213 20:24:20.437224   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.437232   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:20.437240   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:20.437252   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:20.491638   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:20.491670   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:20.507146   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:20.507176   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:20.586662   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:20.586708   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:20.586725   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:20.677650   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:20.677702   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
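
Each pass also attempts `kubectl describe nodes` and fails with "The connection to the server localhost:8443 was refused", which simply means nothing is listening on the apiserver port yet. A quick TCP probe makes the same point without invoking kubectl; localhost:8443 is taken from the log, the rest is a hypothetical snippet:

// Hypothetical helper (not part of minikube): probe the apiserver address
// from the log to see whether anything is accepting connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "localhost:8443" // apiserver address from the refused-connection errors above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}

Until kube-apiserver comes up, every describe-nodes attempt in the cycles that follow fails the same way.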
	I1213 20:24:23.226457   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:23.240139   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:23.240197   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:23.276469   78367 cri.go:89] found id: ""
	I1213 20:24:23.276503   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.276514   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:23.276522   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:23.276576   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:23.321764   78367 cri.go:89] found id: ""
	I1213 20:24:23.321793   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.321804   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:23.321811   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:23.321860   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:23.355263   78367 cri.go:89] found id: ""
	I1213 20:24:23.355297   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.355308   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:23.355315   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:23.355368   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:23.396846   78367 cri.go:89] found id: ""
	I1213 20:24:23.396875   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.396885   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:23.396894   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:23.396955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:23.435540   78367 cri.go:89] found id: ""
	I1213 20:24:23.435567   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.435578   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:23.435586   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:23.435634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:23.473920   78367 cri.go:89] found id: ""
	I1213 20:24:23.473944   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.473959   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:23.473967   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:23.474023   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:23.507136   78367 cri.go:89] found id: ""
	I1213 20:24:23.507168   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.507177   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:23.507183   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:23.507239   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:23.539050   78367 cri.go:89] found id: ""
	I1213 20:24:23.539075   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.539083   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:23.539091   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:23.539104   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:23.553000   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:23.553026   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:23.619106   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:23.619128   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:23.619143   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:23.704028   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:23.704065   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.740575   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:23.740599   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.290469   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:26.303070   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:26.303114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:26.333881   78367 cri.go:89] found id: ""
	I1213 20:24:26.333902   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.333909   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:26.333915   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:26.333957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:26.367218   78367 cri.go:89] found id: ""
	I1213 20:24:26.367246   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.367253   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:26.367258   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:26.367314   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:26.397281   78367 cri.go:89] found id: ""
	I1213 20:24:26.397313   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.397325   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:26.397332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:26.397388   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:26.429238   78367 cri.go:89] found id: ""
	I1213 20:24:26.429260   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.429270   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:26.429290   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:26.429335   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:26.457723   78367 cri.go:89] found id: ""
	I1213 20:24:26.457751   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.457760   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:26.457765   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:26.457820   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:26.487066   78367 cri.go:89] found id: ""
	I1213 20:24:26.487086   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.487093   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:26.487098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:26.487153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:26.517336   78367 cri.go:89] found id: ""
	I1213 20:24:26.517360   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.517367   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:26.517373   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:26.517428   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:26.547918   78367 cri.go:89] found id: ""
	I1213 20:24:26.547940   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.547947   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:26.547955   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:26.547966   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:26.614500   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:26.614527   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:26.614541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:26.688954   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:26.688983   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:26.723430   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:26.723453   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.771679   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:26.771707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.284113   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:29.296309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:29.296365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:29.335369   78367 cri.go:89] found id: ""
	I1213 20:24:29.335395   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.335404   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:29.335411   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:29.335477   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:29.364958   78367 cri.go:89] found id: ""
	I1213 20:24:29.364996   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.365005   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:29.365011   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:29.365056   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:29.395763   78367 cri.go:89] found id: ""
	I1213 20:24:29.395785   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.395792   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:29.395798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:29.395847   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:29.426100   78367 cri.go:89] found id: ""
	I1213 20:24:29.426131   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.426141   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:29.426148   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:29.426212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:29.454982   78367 cri.go:89] found id: ""
	I1213 20:24:29.455011   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.455018   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:29.455025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:29.455086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:29.490059   78367 cri.go:89] found id: ""
	I1213 20:24:29.490088   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.490098   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:29.490105   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:29.490164   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:29.523139   78367 cri.go:89] found id: ""
	I1213 20:24:29.523170   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.523179   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:29.523184   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:29.523235   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:29.553382   78367 cri.go:89] found id: ""
	I1213 20:24:29.553411   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.553422   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:29.553432   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:29.553445   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:29.603370   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:29.603399   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.615270   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:29.615296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:29.676210   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:29.676241   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:29.676256   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:29.748591   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:29.748620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:32.283657   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:32.295699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:32.295770   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:32.326072   78367 cri.go:89] found id: ""
	I1213 20:24:32.326100   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.326109   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:32.326116   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:32.326174   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:32.359219   78367 cri.go:89] found id: ""
	I1213 20:24:32.359267   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.359279   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:32.359287   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:32.359374   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:32.389664   78367 cri.go:89] found id: ""
	I1213 20:24:32.389687   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.389694   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:32.389700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:32.389756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:32.419871   78367 cri.go:89] found id: ""
	I1213 20:24:32.419893   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.419899   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:32.419904   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:32.419955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:32.449254   78367 cri.go:89] found id: ""
	I1213 20:24:32.449282   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.449292   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:32.449300   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:32.449359   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:32.477857   78367 cri.go:89] found id: ""
	I1213 20:24:32.477887   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.477897   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:32.477905   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:32.477965   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:32.507395   78367 cri.go:89] found id: ""
	I1213 20:24:32.507420   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.507429   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:32.507437   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:32.507493   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:32.536846   78367 cri.go:89] found id: ""
	I1213 20:24:32.536882   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.536894   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:32.536904   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:32.536918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:32.586510   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:32.586540   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:32.598914   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:32.598941   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:32.661653   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:32.661673   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:32.661686   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:32.738149   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:32.738180   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:35.274525   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:35.287259   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:35.287338   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:35.321233   78367 cri.go:89] found id: ""
	I1213 20:24:35.321269   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.321280   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:35.321287   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:35.321350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:35.351512   78367 cri.go:89] found id: ""
	I1213 20:24:35.351535   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.351543   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:35.351549   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:35.351607   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:35.380770   78367 cri.go:89] found id: ""
	I1213 20:24:35.380795   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.380805   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:35.380812   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:35.380868   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:35.410311   78367 cri.go:89] found id: ""
	I1213 20:24:35.410339   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.410348   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:35.410356   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:35.410410   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:35.437955   78367 cri.go:89] found id: ""
	I1213 20:24:35.437979   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.437987   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:35.437992   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:35.438039   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:35.467621   78367 cri.go:89] found id: ""
	I1213 20:24:35.467646   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.467657   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:35.467665   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:35.467729   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:35.496779   78367 cri.go:89] found id: ""
	I1213 20:24:35.496801   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.496809   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:35.496814   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:35.496867   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:35.527107   78367 cri.go:89] found id: ""
	I1213 20:24:35.527140   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.527148   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:35.527157   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:35.527167   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:35.573444   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:35.573472   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:35.586107   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:35.586129   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:35.647226   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:35.647249   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:35.647261   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:35.721264   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:35.721297   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.256983   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:38.269600   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:38.269665   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:38.304526   78367 cri.go:89] found id: ""
	I1213 20:24:38.304552   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.304559   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:38.304566   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:38.304621   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:38.334858   78367 cri.go:89] found id: ""
	I1213 20:24:38.334885   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.334896   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:38.334902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:38.334959   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:38.364281   78367 cri.go:89] found id: ""
	I1213 20:24:38.364305   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.364312   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:38.364318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:38.364364   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:38.393853   78367 cri.go:89] found id: ""
	I1213 20:24:38.393878   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.393886   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:38.393892   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:38.393936   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:38.424196   78367 cri.go:89] found id: ""
	I1213 20:24:38.424225   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.424234   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:38.424241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:38.424305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:38.454285   78367 cri.go:89] found id: ""
	I1213 20:24:38.454311   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.454322   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:38.454330   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:38.454382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:38.483158   78367 cri.go:89] found id: ""
	I1213 20:24:38.483187   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.483194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:38.483199   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:38.483250   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:38.512116   78367 cri.go:89] found id: ""
	I1213 20:24:38.512149   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.512161   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:38.512172   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:38.512186   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:38.587026   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:38.587053   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:38.587069   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:38.661024   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:38.661055   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.695893   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:38.695922   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:38.746253   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:38.746282   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.258578   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:41.271632   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:41.271691   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:41.303047   78367 cri.go:89] found id: ""
	I1213 20:24:41.303073   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.303081   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:41.303087   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:41.303149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:41.334605   78367 cri.go:89] found id: ""
	I1213 20:24:41.334642   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.334653   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:41.334662   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:41.334714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:41.367617   78367 cri.go:89] found id: ""
	I1213 20:24:41.367650   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.367661   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:41.367670   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:41.367724   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:41.399772   78367 cri.go:89] found id: ""
	I1213 20:24:41.399800   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.399811   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:41.399819   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:41.399880   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:41.431833   78367 cri.go:89] found id: ""
	I1213 20:24:41.431869   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.431879   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:41.431887   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:41.431948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:41.462640   78367 cri.go:89] found id: ""
	I1213 20:24:41.462669   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.462679   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:41.462688   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:41.462757   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:41.492716   78367 cri.go:89] found id: ""
	I1213 20:24:41.492748   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.492758   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:41.492764   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:41.492823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:41.527697   78367 cri.go:89] found id: ""
	I1213 20:24:41.527729   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.527739   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:41.527750   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:41.527763   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.540507   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:41.540530   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:41.602837   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:41.602873   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:41.602888   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:41.676818   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:41.676855   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:41.713699   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:41.713731   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.263397   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:44.275396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:44.275463   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:44.306065   78367 cri.go:89] found id: ""
	I1213 20:24:44.306095   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.306106   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:44.306114   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:44.306170   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:44.336701   78367 cri.go:89] found id: ""
	I1213 20:24:44.336734   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.336746   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:44.336754   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:44.336803   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:44.367523   78367 cri.go:89] found id: ""
	I1213 20:24:44.367553   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.367564   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:44.367571   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:44.367626   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:44.397934   78367 cri.go:89] found id: ""
	I1213 20:24:44.397960   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.397970   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:44.397978   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:44.398043   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:44.428770   78367 cri.go:89] found id: ""
	I1213 20:24:44.428799   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.428810   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:44.428817   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:44.428874   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:44.459961   78367 cri.go:89] found id: ""
	I1213 20:24:44.459999   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.460011   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:44.460018   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:44.460068   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:44.491377   78367 cri.go:89] found id: ""
	I1213 20:24:44.491407   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.491419   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:44.491426   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:44.491488   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:44.521764   78367 cri.go:89] found id: ""
	I1213 20:24:44.521798   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.521808   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:44.521819   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:44.521835   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:44.584292   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:44.584316   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:44.584328   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:44.654841   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:44.654880   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:44.689572   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:44.689598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.738234   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:44.738265   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:47.250759   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:47.262717   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:47.262786   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:47.291884   78367 cri.go:89] found id: ""
	I1213 20:24:47.291910   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.291917   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:47.291923   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:47.291968   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:47.322010   78367 cri.go:89] found id: ""
	I1213 20:24:47.322036   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.322047   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:47.322056   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:47.322114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:47.352441   78367 cri.go:89] found id: ""
	I1213 20:24:47.352470   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.352478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:47.352483   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:47.352535   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:47.382622   78367 cri.go:89] found id: ""
	I1213 20:24:47.382646   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.382653   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:47.382659   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:47.382709   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:47.413127   78367 cri.go:89] found id: ""
	I1213 20:24:47.413149   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.413156   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:47.413161   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:47.413212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:47.445397   78367 cri.go:89] found id: ""
	I1213 20:24:47.445423   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.445430   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:47.445435   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:47.445483   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:47.475871   78367 cri.go:89] found id: ""
	I1213 20:24:47.475897   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.475904   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:47.475910   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:47.475966   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:47.505357   78367 cri.go:89] found id: ""
	I1213 20:24:47.505382   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.505389   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:47.505397   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:47.505407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:47.568960   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:47.568982   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:47.569010   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:47.646228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:47.646262   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:47.679590   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:47.679616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:47.726854   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:47.726884   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.239188   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:50.251010   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:50.251061   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:50.281168   78367 cri.go:89] found id: ""
	I1213 20:24:50.281194   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.281204   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:50.281211   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:50.281277   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:50.310396   78367 cri.go:89] found id: ""
	I1213 20:24:50.310421   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.310431   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:50.310438   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:50.310491   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:50.340824   78367 cri.go:89] found id: ""
	I1213 20:24:50.340856   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.340866   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:50.340873   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:50.340937   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:50.377401   78367 cri.go:89] found id: ""
	I1213 20:24:50.377430   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.377437   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:50.377443   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:50.377500   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:50.406521   78367 cri.go:89] found id: ""
	I1213 20:24:50.406552   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.406562   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:50.406567   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:50.406632   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:50.440070   78367 cri.go:89] found id: ""
	I1213 20:24:50.440101   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.440112   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:50.440118   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:50.440168   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:50.473103   78367 cri.go:89] found id: ""
	I1213 20:24:50.473134   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.473145   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:50.473152   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:50.473218   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:50.503787   78367 cri.go:89] found id: ""
	I1213 20:24:50.503815   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.503824   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:50.503832   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:50.503842   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:50.551379   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:50.551407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.563705   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:50.563732   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:50.625016   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:50.625046   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:50.625062   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:50.717566   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:50.717601   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.254296   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:53.266940   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:53.266995   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:53.302975   78367 cri.go:89] found id: ""
	I1213 20:24:53.303000   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.303008   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:53.303013   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:53.303080   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:53.338434   78367 cri.go:89] found id: ""
	I1213 20:24:53.338461   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.338469   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:53.338474   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:53.338526   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:53.375117   78367 cri.go:89] found id: ""
	I1213 20:24:53.375146   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.375156   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:53.375164   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:53.375221   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:53.413376   78367 cri.go:89] found id: ""
	I1213 20:24:53.413406   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.413416   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:53.413423   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:53.413482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:53.447697   78367 cri.go:89] found id: ""
	I1213 20:24:53.447725   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.447736   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:53.447743   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:53.447802   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:53.480987   78367 cri.go:89] found id: ""
	I1213 20:24:53.481019   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.481037   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:53.481045   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:53.481149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:53.516573   78367 cri.go:89] found id: ""
	I1213 20:24:53.516602   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.516611   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:53.516617   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:53.516664   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:53.552098   78367 cri.go:89] found id: ""
	I1213 20:24:53.552128   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.552144   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:53.552155   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:53.552168   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:53.632362   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:53.632393   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.667030   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:53.667061   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:53.716328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:53.716355   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:53.730194   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:53.730219   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:53.804612   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.305032   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:56.317875   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:56.317934   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:56.353004   78367 cri.go:89] found id: ""
	I1213 20:24:56.353027   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.353035   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:56.353040   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:56.353086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:56.398694   78367 cri.go:89] found id: ""
	I1213 20:24:56.398722   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.398731   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:56.398739   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:56.398800   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:56.430481   78367 cri.go:89] found id: ""
	I1213 20:24:56.430512   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.430523   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:56.430530   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:56.430589   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:56.460467   78367 cri.go:89] found id: ""
	I1213 20:24:56.460501   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.460512   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:56.460520   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:56.460583   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:56.490776   78367 cri.go:89] found id: ""
	I1213 20:24:56.490804   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.490814   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:56.490822   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:56.490889   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:56.520440   78367 cri.go:89] found id: ""
	I1213 20:24:56.520466   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.520473   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:56.520478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:56.520525   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:56.550233   78367 cri.go:89] found id: ""
	I1213 20:24:56.550258   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.550266   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:56.550271   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:56.550347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:56.580651   78367 cri.go:89] found id: ""
	I1213 20:24:56.580681   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.580692   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:56.580703   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:56.580716   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:56.650811   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.650839   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:56.650892   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:56.728061   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:56.728089   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:56.767782   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:56.767809   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:56.818747   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:56.818781   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:59.331474   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:59.344319   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:59.344379   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:59.373901   78367 cri.go:89] found id: ""
	I1213 20:24:59.373931   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.373941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:59.373947   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:59.373999   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:59.405800   78367 cri.go:89] found id: ""
	I1213 20:24:59.405832   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.405844   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:59.405851   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:59.405922   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:59.435487   78367 cri.go:89] found id: ""
	I1213 20:24:59.435517   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.435527   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:59.435535   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:59.435587   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:59.466466   78367 cri.go:89] found id: ""
	I1213 20:24:59.466489   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.466497   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:59.466502   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:59.466543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:59.500301   78367 cri.go:89] found id: ""
	I1213 20:24:59.500330   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.500337   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:59.500342   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:59.500387   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:59.532614   78367 cri.go:89] found id: ""
	I1213 20:24:59.532642   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.532651   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:59.532658   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:59.532717   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:59.562990   78367 cri.go:89] found id: ""
	I1213 20:24:59.563013   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.563020   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:59.563034   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:59.563078   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:59.593335   78367 cri.go:89] found id: ""
	I1213 20:24:59.593366   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.593376   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:59.593386   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:59.593401   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:59.659058   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:59.659083   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:59.659097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:59.733569   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:59.733600   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:59.770151   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:59.770178   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:59.820506   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:59.820534   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.334083   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:02.346559   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:02.346714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:02.380346   78367 cri.go:89] found id: ""
	I1213 20:25:02.380376   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.380384   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:02.380390   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:02.380441   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:02.412347   78367 cri.go:89] found id: ""
	I1213 20:25:02.412374   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.412385   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:02.412392   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:02.412453   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:02.443408   78367 cri.go:89] found id: ""
	I1213 20:25:02.443441   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.443453   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:02.443461   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:02.443514   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:02.474165   78367 cri.go:89] found id: ""
	I1213 20:25:02.474193   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.474201   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:02.474206   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:02.474272   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:02.505076   78367 cri.go:89] found id: ""
	I1213 20:25:02.505109   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.505121   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:02.505129   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:02.505186   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:02.541145   78367 cri.go:89] found id: ""
	I1213 20:25:02.541174   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.541182   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:02.541187   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:02.541236   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:02.579150   78367 cri.go:89] found id: ""
	I1213 20:25:02.579183   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.579194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:02.579201   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:02.579262   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:02.611542   78367 cri.go:89] found id: ""
	I1213 20:25:02.611582   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.611594   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:02.611607   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:02.611620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:02.661145   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:02.661183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.673918   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:02.673944   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:02.745321   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:02.745345   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:02.745358   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:02.820953   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:02.820992   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.373838   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:05.386758   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:05.386833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:05.419177   78367 cri.go:89] found id: ""
	I1213 20:25:05.419205   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.419215   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:05.419223   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:05.419292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:05.450595   78367 cri.go:89] found id: ""
	I1213 20:25:05.450628   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.450639   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:05.450648   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:05.450707   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:05.481818   78367 cri.go:89] found id: ""
	I1213 20:25:05.481844   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.481852   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:05.481857   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:05.481902   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:05.517195   78367 cri.go:89] found id: ""
	I1213 20:25:05.517230   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.517239   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:05.517246   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:05.517302   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:05.548698   78367 cri.go:89] found id: ""
	I1213 20:25:05.548733   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.548744   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:05.548753   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:05.548811   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:05.579983   78367 cri.go:89] found id: ""
	I1213 20:25:05.580009   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.580015   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:05.580022   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:05.580070   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:05.610660   78367 cri.go:89] found id: ""
	I1213 20:25:05.610685   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.610693   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:05.610699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:05.610750   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:05.641572   78367 cri.go:89] found id: ""
	I1213 20:25:05.641598   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.641605   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:05.641614   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:05.641625   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:05.712243   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:05.712264   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:05.712275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:05.793232   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:05.793271   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.827863   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:05.827901   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:05.877641   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:05.877671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.390425   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:08.402888   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:08.402944   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:08.436903   78367 cri.go:89] found id: ""
	I1213 20:25:08.436931   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.436941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:08.436948   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:08.437005   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:08.469526   78367 cri.go:89] found id: ""
	I1213 20:25:08.469561   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.469574   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:08.469581   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:08.469644   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:08.500136   78367 cri.go:89] found id: ""
	I1213 20:25:08.500165   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.500172   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:08.500178   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:08.500223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:08.537556   78367 cri.go:89] found id: ""
	I1213 20:25:08.537591   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.537603   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:08.537611   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:08.537669   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:08.577468   78367 cri.go:89] found id: ""
	I1213 20:25:08.577492   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.577501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:08.577509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:08.577566   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:08.632075   78367 cri.go:89] found id: ""
	I1213 20:25:08.632103   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.632113   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:08.632120   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:08.632178   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:08.671119   78367 cri.go:89] found id: ""
	I1213 20:25:08.671148   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.671158   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:08.671166   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:08.671225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:08.700873   78367 cri.go:89] found id: ""
	I1213 20:25:08.700900   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.700908   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:08.700916   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:08.700927   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.713084   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:08.713107   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:08.780299   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:08.780331   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:08.780346   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:08.851830   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:08.851865   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:08.886834   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:08.886883   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.435256   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:11.447096   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:11.447155   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:11.477376   78367 cri.go:89] found id: ""
	I1213 20:25:11.477403   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.477411   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:11.477416   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:11.477460   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:11.507532   78367 cri.go:89] found id: ""
	I1213 20:25:11.507564   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.507572   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:11.507582   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:11.507628   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:11.537352   78367 cri.go:89] found id: ""
	I1213 20:25:11.537383   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.537393   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:11.537400   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:11.537450   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:11.567653   78367 cri.go:89] found id: ""
	I1213 20:25:11.567681   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.567693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:11.567700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:11.567756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:11.597752   78367 cri.go:89] found id: ""
	I1213 20:25:11.597782   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.597790   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:11.597795   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:11.597840   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:11.626231   78367 cri.go:89] found id: ""
	I1213 20:25:11.626258   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.626269   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:11.626276   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:11.626334   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:11.655694   78367 cri.go:89] found id: ""
	I1213 20:25:11.655724   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.655733   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:11.655740   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:11.655794   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:11.685714   78367 cri.go:89] found id: ""
	I1213 20:25:11.685742   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.685750   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:11.685758   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:11.685768   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.733749   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:11.733774   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:11.746307   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:11.746330   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:11.807168   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:11.807190   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:11.807202   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:11.878490   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:11.878522   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.416516   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:14.428258   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:14.428339   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:14.458229   78367 cri.go:89] found id: ""
	I1213 20:25:14.458255   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.458263   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:14.458272   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:14.458326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:14.488061   78367 cri.go:89] found id: ""
	I1213 20:25:14.488101   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.488109   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:14.488114   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:14.488159   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:14.516854   78367 cri.go:89] found id: ""
	I1213 20:25:14.516880   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.516888   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:14.516893   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:14.516953   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:14.549881   78367 cri.go:89] found id: ""
	I1213 20:25:14.549908   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.549919   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:14.549925   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:14.549982   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:14.579410   78367 cri.go:89] found id: ""
	I1213 20:25:14.579439   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.579449   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:14.579457   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:14.579507   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:14.609126   78367 cri.go:89] found id: ""
	I1213 20:25:14.609155   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.609163   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:14.609169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:14.609216   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:14.638655   78367 cri.go:89] found id: ""
	I1213 20:25:14.638682   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.638689   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:14.638694   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:14.638739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:14.667950   78367 cri.go:89] found id: ""
	I1213 20:25:14.667977   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.667986   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:14.667997   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:14.668011   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.705223   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:14.705250   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:14.753645   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:14.753671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:14.766082   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:14.766106   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:14.826802   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:14.826829   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:14.826841   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:17.400518   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:17.412464   78367 kubeadm.go:597] duration metric: took 4m2.435244002s to restartPrimaryControlPlane
	W1213 20:25:17.412536   78367 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 20:25:17.412564   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:25:19.422149   78367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.009561199s)
	I1213 20:25:19.422215   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:25:19.435431   78367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:25:19.444465   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:25:19.452996   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:25:19.453011   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:25:19.453051   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:25:19.461055   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:25:19.461096   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:25:19.469525   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:25:19.477399   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:25:19.477442   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:25:19.485719   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.493837   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:25:19.493895   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.502493   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:25:19.510479   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:25:19.510525   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:25:19.518746   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:25:19.585664   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:25:19.585781   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:25:19.709117   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:25:19.709242   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:25:19.709362   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:25:19.865449   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:25:19.867503   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:25:19.867605   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:25:19.867668   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:25:19.867759   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:25:19.867864   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:25:19.867978   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:25:19.868062   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:25:19.868159   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:25:19.868251   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:25:19.868515   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:25:19.868889   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:25:19.869062   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:25:19.869157   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:25:19.955108   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:25:20.380950   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:25:20.496704   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:25:20.598530   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:25:20.612045   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:25:20.613742   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:25:20.613809   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:25:20.733629   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:25:20.735476   78367 out.go:235]   - Booting up control plane ...
	I1213 20:25:20.735586   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:25:20.739585   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:25:20.740414   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:25:20.741056   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:25:20.743491   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:26:00.744556   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:26:00.745298   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:00.745523   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:05.746023   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:05.746244   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:15.746586   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:15.746767   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:35.747606   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:35.747803   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749327   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:27:15.749616   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749642   78367 kubeadm.go:310] 
	I1213 20:27:15.749705   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:27:15.749763   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:27:15.749771   78367 kubeadm.go:310] 
	I1213 20:27:15.749801   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:27:15.749858   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:27:15.749970   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:27:15.749978   78367 kubeadm.go:310] 
	I1213 20:27:15.750116   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:27:15.750147   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:27:15.750175   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:27:15.750182   78367 kubeadm.go:310] 
	I1213 20:27:15.750323   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:27:15.750445   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:27:15.750469   78367 kubeadm.go:310] 
	I1213 20:27:15.750594   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:27:15.750679   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:27:15.750750   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:27:15.750838   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:27:15.750867   78367 kubeadm.go:310] 
	I1213 20:27:15.751901   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:27:15.752044   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:27:15.752128   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1213 20:27:15.752253   78367 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 20:27:15.752296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:27:16.207985   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:27:16.221729   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:27:16.230896   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:27:16.230915   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:27:16.230963   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:27:16.239780   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:27:16.239853   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:27:16.248841   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:27:16.257494   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:27:16.257547   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:27:16.266220   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.274395   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:27:16.274446   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.282941   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:27:16.291155   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:27:16.291206   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:27:16.299780   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:27:16.492967   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:29:12.537014   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:29:12.537124   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:29:12.538949   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:29:12.539024   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:29:12.539128   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:29:12.539224   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:29:12.539305   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:29:12.539357   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:29:12.540964   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:29:12.541051   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:29:12.541164   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:29:12.541297   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:29:12.541385   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:29:12.541510   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:29:12.541593   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:29:12.541696   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:29:12.541764   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:29:12.541825   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:29:12.541886   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:29:12.541918   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:29:12.541993   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:29:12.542062   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:29:12.542141   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:29:12.542249   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:29:12.542337   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:29:12.542454   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:29:12.542564   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:29:12.542608   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:29:12.542689   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:29:12.544295   78367 out.go:235]   - Booting up control plane ...
	I1213 20:29:12.544374   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:29:12.544440   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:29:12.544496   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:29:12.544566   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:29:12.544708   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:29:12.544763   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:29:12.544822   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.544980   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545046   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545210   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545282   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545456   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545529   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545681   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545742   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545910   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545920   78367 kubeadm.go:310] 
	I1213 20:29:12.545956   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:29:12.545989   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:29:12.545999   78367 kubeadm.go:310] 
	I1213 20:29:12.546026   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:29:12.546053   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:29:12.546145   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:29:12.546153   78367 kubeadm.go:310] 
	I1213 20:29:12.546246   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:29:12.546317   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:29:12.546377   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:29:12.546386   78367 kubeadm.go:310] 
	I1213 20:29:12.546485   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:29:12.546561   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:29:12.546568   78367 kubeadm.go:310] 
	I1213 20:29:12.546677   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:29:12.546761   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:29:12.546831   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:29:12.546913   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:29:12.546942   78367 kubeadm.go:310] 
	I1213 20:29:12.546976   78367 kubeadm.go:394] duration metric: took 7m57.617019103s to StartCluster
	I1213 20:29:12.547025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:29:12.547089   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:29:12.589567   78367 cri.go:89] found id: ""
	I1213 20:29:12.589592   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.589599   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:29:12.589605   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:29:12.589660   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:29:12.621414   78367 cri.go:89] found id: ""
	I1213 20:29:12.621438   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.621445   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:29:12.621450   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:29:12.621510   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:29:12.652624   78367 cri.go:89] found id: ""
	I1213 20:29:12.652655   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.652666   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:29:12.652674   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:29:12.652739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:29:12.682651   78367 cri.go:89] found id: ""
	I1213 20:29:12.682683   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.682693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:29:12.682701   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:29:12.682767   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:29:12.714100   78367 cri.go:89] found id: ""
	I1213 20:29:12.714127   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.714134   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:29:12.714140   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:29:12.714194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:29:12.745402   78367 cri.go:89] found id: ""
	I1213 20:29:12.745436   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.745446   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:29:12.745454   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:29:12.745515   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:29:12.775916   78367 cri.go:89] found id: ""
	I1213 20:29:12.775942   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.775949   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:29:12.775954   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:29:12.776009   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:29:12.806128   78367 cri.go:89] found id: ""
	I1213 20:29:12.806161   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.806171   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:29:12.806183   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:29:12.806197   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:29:12.841122   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:29:12.841151   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:29:12.888169   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:29:12.888203   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:29:12.900707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:29:12.900733   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:29:12.969370   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:29:12.969408   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:29:12.969423   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 20:29:13.074903   78367 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:29:13.074961   78367 out.go:270] * 
	W1213 20:29:13.075016   78367 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.075034   78367 out.go:270] * 
	W1213 20:29:13.075878   78367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 20:29:13.079429   78367 out.go:201] 
	W1213 20:29:13.080898   78367 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.080953   78367 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:29:13.080984   78367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:29:13.082622   78367 out.go:201] 
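The suggestion printed just above can be turned into concrete commands against this profile. This is only a sketch based on the log's own hint: the profile name old-k8s-version-613355 comes from the surrounding log lines, and any other flags the test originally passed to minikube start are not reproduced here.

	# Look at why the kubelet never answers http://localhost:10248/healthz on the node
	out/minikube-linux-amd64 -p old-k8s-version-613355 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# Retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-613355 --extra-config=kubelet.cgroup-driver=systemd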
	
	
	==> CRI-O <==
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.151582325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734121754151559771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=655e2f59-c889-4684-af61-922903d9b07d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.152026117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62be3a80-aaf9-4ad2-8b45-14a64849b43e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.152070250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62be3a80-aaf9-4ad2-8b45-14a64849b43e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.152100456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62be3a80-aaf9-4ad2-8b45-14a64849b43e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.179582341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a717924a-64c0-4f6b-b4ae-5be72f12efc4 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.179689673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a717924a-64c0-4f6b-b4ae-5be72f12efc4 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.180529794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3990afb7-d390-4be4-bae8-f23e129ba2f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.180902384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734121754180883751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3990afb7-d390-4be4-bae8-f23e129ba2f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.181268960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6962a52-66b9-4dd6-825e-73bec540a859 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.181339391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6962a52-66b9-4dd6-825e-73bec540a859 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.181388386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a6962a52-66b9-4dd6-825e-73bec540a859 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.208592510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87190d74-62a8-47fe-a647-7d9f91c2bb6d name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.208700112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87190d74-62a8-47fe-a647-7d9f91c2bb6d name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.209498527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8408b30-5a36-48c3-a5d2-f06b07b4ea26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.209887855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734121754209870143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8408b30-5a36-48c3-a5d2-f06b07b4ea26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.210286936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebf7a319-ac1f-4872-b4ac-e898a2db043c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.210328643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebf7a319-ac1f-4872-b4ac-e898a2db043c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.210357212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ebf7a319-ac1f-4872-b4ac-e898a2db043c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.238053086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b68c736-8506-4823-8cfc-6aef15d9bb2a name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.238115398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b68c736-8506-4823-8cfc-6aef15d9bb2a name=/runtime.v1.RuntimeService/Version
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.239265407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a200a54e-1893-41b4-8b12-ff290053acb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.239680175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734121754239622736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a200a54e-1893-41b4-8b12-ff290053acb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.240142087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d73d2d0e-6a46-4f99-ae41-ad32152b1992 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.240186180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d73d2d0e-6a46-4f99-ae41-ad32152b1992 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:29:14 old-k8s-version-613355 crio[625]: time="2024-12-13 20:29:14.240219642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d73d2d0e-6a46-4f99-ae41-ad32152b1992 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 20:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060967] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039950] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.018359] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.144058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 20:21] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.064800] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055429] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.157241] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.148226] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.222516] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.266047] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.062703] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.713915] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +12.418230] kauditd_printk_skb: 46 callbacks suppressed
	[Dec13 20:25] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Dec13 20:27] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.061209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:29:14 up 8 min,  0 users,  load average: 0.01, 0.15, 0.11
	Linux old-k8s-version-613355 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: goroutine 151 [select]:
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000db5ef0, 0x4f0ac20, 0xc0007649b0, 0x1, 0xc0001020c0)
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d02a0, 0xc0001020c0)
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009949a0, 0xc000407d80)
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 13 20:29:12 old-k8s-version-613355 kubelet[5501]: E1213 20:29:12.390106    5501 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dold-k8s-version-613355&limit=500&resourceVersion=0": dial tcp 192.168.72.134:8443: connect: connection refused
	Dec 13 20:29:12 old-k8s-version-613355 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 13 20:29:12 old-k8s-version-613355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 20:29:13 old-k8s-version-613355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 13 20:29:13 old-k8s-version-613355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 13 20:29:13 old-k8s-version-613355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 13 20:29:13 old-k8s-version-613355 kubelet[5568]: I1213 20:29:13.132823    5568 server.go:416] Version: v1.20.0
	Dec 13 20:29:13 old-k8s-version-613355 kubelet[5568]: I1213 20:29:13.133094    5568 server.go:837] Client rotation is on, will bootstrap in background
	Dec 13 20:29:13 old-k8s-version-613355 kubelet[5568]: I1213 20:29:13.134955    5568 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 13 20:29:13 old-k8s-version-613355 kubelet[5568]: W1213 20:29:13.135957    5568 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 13 20:29:13 old-k8s-version-613355 kubelet[5568]: I1213 20:29:13.136082    5568 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (217.537299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-613355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.32s)
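The post-mortem above guards further kubectl commands behind a probe of the profile's apiserver: it runs `minikube status --format={{.APIServer}}`, tolerates the non-zero exit ("may be ok"), and skips kubectl once the printed state is "Stopped". A minimal Go sketch of that kind of guard follows; apiServerRunning and the hard-coded binary/profile names are illustrative assumptions, not the actual helpers_test.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiServerRunning shells out to minikube status and reports whether the
	// APIServer field printed "Running". minikube exits non-zero for stopped
	// components, so the printed state is inspected even when err != nil.
	func apiServerRunning(minikubeBin, profile string) (bool, string, error) {
		out, err := exec.Command(minikubeBin,
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
		state := strings.TrimSpace(string(out))
		return state == "Running", state, err
	}

	func main() {
		ok, state, err := apiServerRunning("out/minikube-linux-amd64", "old-k8s-version-613355")
		if !ok {
			// Mirrors the post-mortem above: apiserver not running, skip kubectl.
			fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q, err=%v)\n", state, err)
			return
		}
		fmt.Println("apiserver is running, kubectl post-mortem can proceed")
	}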

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
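Each WARNING line that follows records one failed poll of the apiserver for pods matching that label selector; while kube-apiserver on 192.168.72.134:8443 is down, every list attempt fails with "connection refused" and the helper keeps retrying until the 9m0s deadline. A rough client-go sketch of such a poll loop is shown below; waitForLabeledPods, the kubeconfig handling, and the 3-second interval are illustrative assumptions rather than the actual helpers_test.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPods lists pods by label selector on an interval, logs a
	// warning when the list call fails (e.g. connection refused while the
	// apiserver is down), and gives up once the timeout elapses.
	func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				return nil
			}
			if err != nil {
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for pods matching %q in %q", timeout, selector, ns)
			}
			time.Sleep(3 * time.Second)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForLabeledPods(context.Background(), cs,
			"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
		fmt.Println("result:", err)
	}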
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:29:30.968183   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:29:41.597951   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:29:44.009473   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:30:23.337991   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:30:41.725953   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:30:55.165334   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 11 more times]
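Note: each of these warnings is the test helper retrying a pod list against the stopped control plane at 192.168.72.134:8443. A minimal sketch of the equivalent manual query (the context name is a placeholder, not taken from this report) would be:

	kubectl --context <old-k8s-version-profile> get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

which would be expected to fail with the same "connection refused" while the API server is down.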
E1213 20:31:28.029779   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/no-preload-475934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 18 more times]
E1213 20:31:47.105706   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 7 more times]
E1213 20:31:55.731846   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/no-preload-475934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 18 more times]
E1213 20:32:14.810331   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 3 more times]
E1213 20:32:18.230278   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 8 more times]
E1213 20:32:27.499027   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 35 more times]
E1213 20:33:03.112182   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 20 more times]
E1213 20:33:24.127456   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 25 more times]
E1213 20:33:50.563126   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
[previous warning repeated 10 more times]
E1213 20:34:01.060119   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 24 more times)
E1213 20:34:26.173956   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 14 more times)
E1213 20:34:41.597923   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 2 more times)
E1213 20:34:44.010290   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 2 more times)
E1213 20:34:47.192267   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 35 more times)
E1213 20:35:23.337600   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:35:24.123574   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 16 more times)
E1213 20:35:41.726588   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 13 more times)
E1213 20:35:55.165260   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 8 more times)
E1213 20:36:04.662832   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
(previous warning repeated 23 more times)
E1213 20:36:28.029019   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/no-preload-475934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:36:46.400438   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:36:47.106237   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:37:27.499803   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:37:47.085654   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:38:03.111902   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (222.878325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-613355" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
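For reference, the readiness check that times out above can be re-run by hand against the same cluster. A minimal sketch, assuming the old-k8s-version-613355 profile name doubles as the kubectl context (minikube's usual behaviour) and using the same namespace and label selector that appear in the warnings:

  # list the pods the test helper was polling
  kubectl --context old-k8s-version-613355 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  # block until they are ready, with an explicit timeout matching the test's 9m budget
  kubectl --context old-k8s-version-613355 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

With the apiserver reported as Stopped, both commands would fail with the same connection-refused error shown in the log.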
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (209.69728ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-613355 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-191190 image list                          | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-535459             | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-535459                  | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | no-preload-475934 image list                           | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| image   | newest-cni-535459 image list                           | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| image   | default-k8s-diff-port-355668                           | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
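	The "Last Start" log below corresponds to the final start row in the Audit table above. Reassembled from those wrapped table cells, the invocation was roughly:
	
	  out/minikube-linux-amd64 start -p newest-cni-535459 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.2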
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 20:23:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 20:23:38.197995   79820 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:23:38.198359   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198412   79820 out.go:358] Setting ErrFile to fd 2...
	I1213 20:23:38.198430   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198912   79820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:23:38.199937   79820 out.go:352] Setting JSON to false
	I1213 20:23:38.200882   79820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7561,"bootTime":1734113857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:23:38.200969   79820 start.go:139] virtualization: kvm guest
	I1213 20:23:38.202746   79820 out.go:177] * [newest-cni-535459] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:23:38.204302   79820 notify.go:220] Checking for updates...
	I1213 20:23:38.204304   79820 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:23:38.205592   79820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:23:38.206687   79820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:38.207863   79820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:23:38.208920   79820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:23:38.209928   79820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:23:38.211390   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:38.211789   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.211857   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.227106   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I1213 20:23:38.227528   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.228121   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.228141   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.228624   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.228802   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.229038   79820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:23:38.229314   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.229353   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.244124   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I1213 20:23:38.244541   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.245118   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.245150   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.245472   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.245656   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.280882   79820 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 20:23:38.282056   79820 start.go:297] selected driver: kvm2
	I1213 20:23:38.282071   79820 start.go:901] validating driver "kvm2" against &{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.282177   79820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:23:38.282946   79820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.283023   79820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:23:38.297713   79820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:23:38.298132   79820 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:23:38.298167   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:23:38.298222   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:38.298272   79820 start.go:340] cluster config:
	{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.298394   79820 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.299870   79820 out.go:177] * Starting "newest-cni-535459" primary control-plane node in "newest-cni-535459" cluster
	I1213 20:23:38.300922   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:23:38.300954   79820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 20:23:38.300961   79820 cache.go:56] Caching tarball of preloaded images
	I1213 20:23:38.301027   79820 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:23:38.301037   79820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 20:23:38.301139   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:38.301353   79820 start.go:360] acquireMachinesLock for newest-cni-535459: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:23:38.301405   79820 start.go:364] duration metric: took 31.317µs to acquireMachinesLock for "newest-cni-535459"
	I1213 20:23:38.301424   79820 start.go:96] Skipping create...Using existing machine configuration
	I1213 20:23:38.301434   79820 fix.go:54] fixHost starting: 
	I1213 20:23:38.301810   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.301846   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.316577   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I1213 20:23:38.317005   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.317449   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.317467   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.317793   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.317965   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.318117   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:23:38.319590   79820 fix.go:112] recreateIfNeeded on newest-cni-535459: state=Stopped err=<nil>
	I1213 20:23:38.319614   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	W1213 20:23:38.319782   79820 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 20:23:38.321580   79820 out.go:177] * Restarting existing kvm2 VM for "newest-cni-535459" ...
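	The restart recorded above is done through libvirt: the kvm2 driver ensures the "default" and "mk-newest-cni-535459" networks are active, regenerates the domain XML and starts the domain. A rough virsh equivalent, using the domain and network names from the log (the driver talks to libvirt's API directly rather than shelling out, so this is only an illustration):

# Ensure both libvirt networks are active, inspect the domain definition, then boot it.
virsh --connect qemu:///system net-start default 2>/dev/null || true
virsh --connect qemu:///system net-start mk-newest-cni-535459 2>/dev/null || true
virsh --connect qemu:///system dumpxml newest-cni-535459 > /tmp/newest-cni-535459.xml
virsh --connect qemu:///system start newest-cni-535459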
	I1213 20:23:38.105462   77223 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.795842823s)
	I1213 20:23:38.105518   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:38.120268   77223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:38.129684   77223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:38.141849   77223 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:38.141869   77223 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:38.141910   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:23:38.150679   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:38.150731   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:38.159954   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:23:38.168900   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:38.168957   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:38.178775   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.187799   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:38.187850   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.197158   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:23:38.206667   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:38.206722   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:23:38.216276   77223 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:38.370967   77223 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
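	The 77223 lines above record minikube's stale-config check before re-initialising: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is missing, then kubeadm init is re-run against the generated config. A rough shell equivalent of that sequence, with the endpoint, paths and binary location taken from the log (the --ignore-preflight-errors list is abbreviated here):

endpoint="https://control-plane.minikube.internal:8443"
# Drop any kubeconfig that does not reference the expected control-plane endpoint.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done
# Re-run kubeadm init from minikube's generated config (preflight-error list abbreviated).
sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem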
	I1213 20:23:39.027955   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:39.041250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:39.041315   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:39.083287   78367 cri.go:89] found id: ""
	I1213 20:23:39.083314   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.083324   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:39.083331   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:39.083384   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:39.125760   78367 cri.go:89] found id: ""
	I1213 20:23:39.125787   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.125798   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:39.125805   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:39.125857   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:39.159459   78367 cri.go:89] found id: ""
	I1213 20:23:39.159487   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.159497   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:39.159504   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:39.159557   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:39.194175   78367 cri.go:89] found id: ""
	I1213 20:23:39.194204   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.194211   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:39.194217   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:39.194265   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:39.228851   78367 cri.go:89] found id: ""
	I1213 20:23:39.228879   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.228889   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:39.228897   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:39.228948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:39.266408   78367 cri.go:89] found id: ""
	I1213 20:23:39.266441   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.266452   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:39.266460   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:39.266505   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:39.303917   78367 cri.go:89] found id: ""
	I1213 20:23:39.303946   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.303957   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:39.303965   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:39.304024   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:39.337643   78367 cri.go:89] found id: ""
	I1213 20:23:39.337670   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.337680   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:39.337690   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:39.337707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:39.394343   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:39.394375   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:39.411615   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:39.411645   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:39.484070   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:39.484095   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:39.484110   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:39.570207   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:39.570231   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
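	Because crictl reported no kube-apiserver, etcd or other control-plane containers, the 78367 process falls back to host-level diagnostics; the commands it shells out to are visible above and can be reproduced on the node directly (kubectl path and kubeconfig as logged):

# Same diagnostics the log gatherer runs when no CRI containers are found.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a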
	I1213 20:23:38.322621   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Start
	I1213 20:23:38.322783   79820 main.go:141] libmachine: (newest-cni-535459) starting domain...
	I1213 20:23:38.322806   79820 main.go:141] libmachine: (newest-cni-535459) ensuring networks are active...
	I1213 20:23:38.323533   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network default is active
	I1213 20:23:38.323827   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network mk-newest-cni-535459 is active
	I1213 20:23:38.324140   79820 main.go:141] libmachine: (newest-cni-535459) getting domain XML...
	I1213 20:23:38.324747   79820 main.go:141] libmachine: (newest-cni-535459) creating domain...
	I1213 20:23:39.564073   79820 main.go:141] libmachine: (newest-cni-535459) waiting for IP...
	I1213 20:23:39.565035   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.565551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.565617   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.565533   79856 retry.go:31] will retry after 298.228952ms: waiting for domain to come up
	I1213 20:23:39.865149   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.865713   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.865742   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.865696   79856 retry.go:31] will retry after 251.6627ms: waiting for domain to come up
	I1213 20:23:40.119294   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.119854   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.119884   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.119834   79856 retry.go:31] will retry after 300.482126ms: waiting for domain to come up
	I1213 20:23:40.422534   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.423263   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.423290   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.423228   79856 retry.go:31] will retry after 512.35172ms: waiting for domain to come up
	I1213 20:23:40.936920   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.937508   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.937541   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.937492   79856 retry.go:31] will retry after 706.292926ms: waiting for domain to come up
	I1213 20:23:41.645625   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:41.646229   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:41.646365   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:41.646289   79856 retry.go:31] will retry after 925.304714ms: waiting for domain to come up
	I1213 20:23:42.572832   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:42.573505   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:42.573551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:42.573492   79856 retry.go:31] will retry after 784.905312ms: waiting for domain to come up
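	The "waiting for IP" retries above poll libvirt until the domain's MAC address shows up with a DHCP lease on the mk-newest-cni-535459 network. A hand-rolled equivalent with virsh, using the MAC and network name from the log (minikube uses an increasing backoff; a fixed sleep keeps the sketch short):

mac="52:54:00:7d:17:89"
net="mk-newest-cni-535459"
# Poll the network's DHCP leases until the domain's MAC has been handed an address.
until lease=$(virsh --connect qemu:///system net-dhcp-leases "$net" | awk -v m="$mac" '$3 == m {print $5}' | grep .); do
  sleep 1
done
echo "${lease%%/*}"   # the lease column is CIDR-formatted, e.g. 192.168.50.11/24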
	I1213 20:23:44.821257   77510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.710060568s)
	I1213 20:23:44.821343   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:44.851774   77510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:44.867597   77510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:44.882988   77510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:44.883012   77510 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:44.883061   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1213 20:23:44.897859   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:44.897930   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:44.930490   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1213 20:23:44.940775   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:44.940832   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:44.949814   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.958792   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:44.958864   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.967799   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1213 20:23:44.976918   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:44.976978   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:23:44.985827   77510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:45.032679   77510 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:45.032823   77510 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:45.154457   77510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:45.154613   77510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:45.154753   77510 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:45.168560   77510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:45.170392   77510 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:45.170484   77510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:45.170567   77510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:45.170671   77510 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:45.170773   77510 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:45.170895   77510 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:45.175078   77510 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:45.175301   77510 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:45.175631   77510 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:45.175826   77510 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:45.176621   77510 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:45.176938   77510 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:45.177096   77510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:45.425420   77510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:45.744337   77510 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.051697   77510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.134768   77510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.244436   77510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.245253   77510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.248609   77510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:23:46.425197   77223 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:46.425300   77223 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:46.425412   77223 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:46.425543   77223 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:46.425669   77223 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:46.425751   77223 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:46.427622   77223 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:46.427725   77223 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:46.427829   77223 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:46.427918   77223 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:46.428011   77223 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:46.428119   77223 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:46.428197   77223 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:46.428286   77223 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:46.428363   77223 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:46.428447   77223 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:46.428558   77223 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:46.428626   77223 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:46.428704   77223 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:46.428791   77223 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:46.428896   77223 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.428988   77223 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.429081   77223 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.429176   77223 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.429297   77223 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.429377   77223 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:23:46.430801   77223 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.430919   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.431003   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.431082   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.431200   77223 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.431334   77223 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.431408   77223 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.431609   77223 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.431761   77223 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.431850   77223 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.304495ms
	I1213 20:23:46.432010   77223 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 20:23:46.432103   77223 kubeadm.go:310] [api-check] The API server is healthy after 5.002258285s
	I1213 20:23:46.432266   77223 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:46.432423   77223 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:46.432498   77223 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:46.432678   77223 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-475934 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:46.432749   77223 kubeadm.go:310] [bootstrap-token] Using token: ztynho.1kbaokhemrbxet6k
	I1213 20:23:46.434022   77223 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:46.434143   77223 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:46.434228   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:46.434361   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:46.434498   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:46.434622   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:46.434723   77223 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:46.434870   77223 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:46.434940   77223 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:46.435004   77223 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:46.435013   77223 kubeadm.go:310] 
	I1213 20:23:46.435096   77223 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:46.435109   77223 kubeadm.go:310] 
	I1213 20:23:46.435171   77223 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:46.435177   77223 kubeadm.go:310] 
	I1213 20:23:46.435197   77223 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:46.435248   77223 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:46.435294   77223 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:46.435300   77223 kubeadm.go:310] 
	I1213 20:23:46.435352   77223 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:46.435363   77223 kubeadm.go:310] 
	I1213 20:23:46.435402   77223 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:46.435408   77223 kubeadm.go:310] 
	I1213 20:23:46.435455   77223 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:46.435519   77223 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:46.435617   77223 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:46.435639   77223 kubeadm.go:310] 
	I1213 20:23:46.435750   77223 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:46.435854   77223 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:46.435869   77223 kubeadm.go:310] 
	I1213 20:23:46.435980   77223 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436148   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:46.436179   77223 kubeadm.go:310] 	--control-plane 
	I1213 20:23:46.436189   77223 kubeadm.go:310] 
	I1213 20:23:46.436310   77223 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:46.436321   77223 kubeadm.go:310] 
	I1213 20:23:46.436460   77223 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436635   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 20:23:46.436652   77223 cni.go:84] Creating CNI manager for ""
	I1213 20:23:46.436659   77223 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:46.438047   77223 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
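	"Configuring bridge CNI" here amounts to dropping a conflist into /etc/cni/net.d (the 496-byte /etc/cni/net.d/1-k8s.conflist copied later in this log) so CRI-O picks up a bridge network for the pod CIDR. A minimal illustration of such a conflist; the field values, including the kubeadm-default subnet, are indicative only and not a byte-for-byte copy of minikube's file:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF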
	I1213 20:23:42.109283   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:42.126005   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:42.126094   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:42.169463   78367 cri.go:89] found id: ""
	I1213 20:23:42.169494   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.169505   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:42.169512   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:42.169573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:42.214207   78367 cri.go:89] found id: ""
	I1213 20:23:42.214237   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.214248   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:42.214265   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:42.214327   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:42.255998   78367 cri.go:89] found id: ""
	I1213 20:23:42.256030   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.256041   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:42.256049   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:42.256104   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:42.295578   78367 cri.go:89] found id: ""
	I1213 20:23:42.295607   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.295618   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:42.295625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:42.295686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:42.336462   78367 cri.go:89] found id: ""
	I1213 20:23:42.336489   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.336501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:42.336509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:42.336568   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:42.377959   78367 cri.go:89] found id: ""
	I1213 20:23:42.377987   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.377998   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:42.378020   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:42.378083   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:42.421761   78367 cri.go:89] found id: ""
	I1213 20:23:42.421790   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.421799   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:42.421807   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:42.421866   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:42.456346   78367 cri.go:89] found id: ""
	I1213 20:23:42.456373   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.456387   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:42.456397   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:42.456411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:42.472200   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:42.472241   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:42.544913   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:42.544938   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:42.544954   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:42.646820   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:42.646869   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:42.685374   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:42.685411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.244342   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:45.257131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:45.257210   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:45.291023   78367 cri.go:89] found id: ""
	I1213 20:23:45.291064   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.291072   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:45.291085   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:45.291145   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:45.322469   78367 cri.go:89] found id: ""
	I1213 20:23:45.322499   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.322509   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:45.322516   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:45.322574   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:45.364647   78367 cri.go:89] found id: ""
	I1213 20:23:45.364679   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.364690   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:45.364696   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:45.364754   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:45.406124   78367 cri.go:89] found id: ""
	I1213 20:23:45.406151   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.406161   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:45.406169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:45.406229   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:45.449418   78367 cri.go:89] found id: ""
	I1213 20:23:45.449442   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.449450   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:45.449456   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:45.449513   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:45.491190   78367 cri.go:89] found id: ""
	I1213 20:23:45.491221   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.491231   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:45.491239   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:45.491312   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:45.537336   78367 cri.go:89] found id: ""
	I1213 20:23:45.537365   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.537375   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:45.537383   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:45.537442   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:45.574826   78367 cri.go:89] found id: ""
	I1213 20:23:45.574873   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.574884   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:45.574897   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:45.574911   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.656859   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:45.656900   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:45.671183   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:45.671211   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:45.748645   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:45.748670   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:45.748684   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:45.861549   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:45.861598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:43.360177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:43.360711   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:43.360749   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:43.360702   79856 retry.go:31] will retry after 910.256009ms: waiting for domain to come up
	I1213 20:23:44.272014   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:44.272526   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:44.272555   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:44.272488   79856 retry.go:31] will retry after 1.534434138s: waiting for domain to come up
	I1213 20:23:45.809190   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:45.809761   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:45.809786   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:45.809755   79856 retry.go:31] will retry after 2.307546799s: waiting for domain to come up
	I1213 20:23:48.120134   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:48.120663   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:48.120688   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:48.120620   79856 retry.go:31] will retry after 2.815296829s: waiting for domain to come up
	I1213 20:23:46.250264   77510 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.250387   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.250522   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.250655   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.274127   77510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.280501   77510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.280570   77510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.407152   77510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.407342   77510 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.909234   77510 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.289561ms
	I1213 20:23:46.909341   77510 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 20:23:46.439167   77223 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:46.452642   77223 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:46.478384   77223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:46.478435   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:46.478467   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-475934 minikube.k8s.io/updated_at=2024_12_13T20_23_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=no-preload-475934 minikube.k8s.io/primary=true
	I1213 20:23:46.497425   77223 ops.go:34] apiserver oom_adj: -16
	I1213 20:23:46.697773   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.198632   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.697921   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.198923   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.697941   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.198682   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.698572   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.198476   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.698077   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.793538   77223 kubeadm.go:1113] duration metric: took 4.315156477s to wait for elevateKubeSystemPrivileges
	I1213 20:23:50.793579   77223 kubeadm.go:394] duration metric: took 5m1.991513079s to StartCluster
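	The burst of `kubectl get sa default` calls above (roughly every 500ms) is minikube waiting for the default service account to appear after elevating kube-system privileges via the minikube-rbac clusterrolebinding and labelling the node. The same steps by hand, with the binary path and node name from the log (node labels abbreviated, retry loop simplified):

KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
KCFG=/var/lib/minikube/kubeconfig
# Grant cluster-admin to kube-system's default service account and label the control-plane node.
sudo $KUBECTL --kubeconfig=$KCFG create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default
sudo $KUBECTL --kubeconfig=$KCFG label --overwrite nodes no-preload-475934 \
  minikube.k8s.io/name=no-preload-475934 minikube.k8s.io/primary=true
# Poll until the "default" service account has been created by the controller manager.
until sudo $KUBECTL --kubeconfig=$KCFG get sa default >/dev/null 2>&1; do
  sleep 0.5
done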
	I1213 20:23:50.793600   77223 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.793686   77223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:50.795098   77223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.795375   77223 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:50.795446   77223 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:50.795546   77223 addons.go:69] Setting storage-provisioner=true in profile "no-preload-475934"
	I1213 20:23:50.795565   77223 addons.go:234] Setting addon storage-provisioner=true in "no-preload-475934"
	W1213 20:23:50.795574   77223 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:50.795605   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.795621   77223 config.go:182] Loaded profile config "no-preload-475934": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:50.795673   77223 addons.go:69] Setting default-storageclass=true in profile "no-preload-475934"
	I1213 20:23:50.795698   77223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-475934"
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796080   77223 addons.go:69] Setting dashboard=true in profile "no-preload-475934"
	I1213 20:23:50.796098   77223 addons.go:234] Setting addon dashboard=true in "no-preload-475934"
	I1213 20:23:50.796100   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1213 20:23:50.796105   77223 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:50.796129   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796167   77223 addons.go:69] Setting metrics-server=true in profile "no-preload-475934"
	I1213 20:23:50.796187   77223 addons.go:234] Setting addon metrics-server=true in "no-preload-475934"
	W1213 20:23:50.796195   77223 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:50.796223   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796371   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796476   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796502   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796625   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796665   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.802558   77223 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:50.804240   77223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:50.815506   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1213 20:23:50.815508   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1213 20:23:50.815849   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I1213 20:23:50.816023   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816131   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816355   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816463   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1213 20:23:50.816587   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816610   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816711   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816731   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816857   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816968   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817049   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817074   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817091   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817187   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.817334   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817353   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817814   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.817854   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818079   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818094   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818681   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818685   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818721   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818756   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.839237   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1213 20:23:50.855736   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.856284   77223 addons.go:234] Setting addon default-storageclass=true in "no-preload-475934"
	W1213 20:23:50.856308   77223 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:50.856341   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.856381   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.856404   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.856715   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.856733   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.856757   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.857004   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.859133   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.861074   77223 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:50.862375   77223 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:50.863494   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:50.863514   77223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:50.863535   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.874249   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.874355   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874381   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.874406   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874481   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.874755   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.875083   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
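	(The "new ssh client" line above records the connection parameters used to copy the addon manifests into the guest: the VM's IP, port 22, the per-machine private key, and user "docker". As a rough illustration only, and assuming golang.org/x/crypto/ssh rather than minikube's own sshutil helper, opening an equivalent client could look like the sketch below.)

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialGuest opens an SSH connection using the same parameters logged by
    // sshutil.go above: guest IP, port 22, the machine's private key, user "docker".
    func dialGuest(ip, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
    }

    func main() {
        client, err := dialGuest("192.168.61.128",
            "/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa",
            "docker")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }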
	I1213 20:23:50.876889   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1213 20:23:50.876927   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I1213 20:23:50.877256   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1213 20:23:50.877531   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877577   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877899   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.878141   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878154   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878167   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878170   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878413   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878435   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878483   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878527   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878869   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.878879   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878893   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.879461   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.879507   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.880758   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.881011   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.882329   77223 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:50.882392   77223 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:50.883529   77223 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:50.883551   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:50.883911   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.884480   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:50.884501   77223 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:50.884518   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.888177   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888302   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888537   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888583   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888850   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888867   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.888870   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.889051   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.889070   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889186   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889244   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889291   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.889578   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889741   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.900416   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1213 20:23:50.904150   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.904681   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.904710   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.905101   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.905353   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.907076   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.907309   77223 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:50.907327   77223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:50.907346   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.913266   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913676   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.913698   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913923   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.914129   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.914296   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.914481   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:51.062632   77223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:51.080757   77223 node_ready.go:35] waiting up to 6m0s for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096457   77223 node_ready.go:49] node "no-preload-475934" has status "Ready":"True"
	I1213 20:23:51.096488   77223 node_ready.go:38] duration metric: took 15.695926ms for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096501   77223 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:51.101069   77223 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
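	(The node_ready/pod_ready waits above poll the API server for the node's Ready condition, then for the Ready condition on each system-critical pod. A minimal sketch of the node check, assuming client-go is available; the kubeconfig path and the main function are placeholders for illustration.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready == True,
    // which is what the "has status Ready:True" log line above reflects.
    func nodeReady(kubeconfig, name string) (bool, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return false, err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return false, err
        }
        node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        ready, err := nodeReady("/path/to/kubeconfig", "no-preload-475934")
        fmt.Println(ready, err)
    }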
	I1213 20:23:51.153214   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:51.201828   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:51.201861   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:51.257276   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:51.286719   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:51.286743   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:23:48.414982   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:48.431396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:48.431482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:48.476067   78367 cri.go:89] found id: ""
	I1213 20:23:48.476112   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.476124   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:48.476131   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:48.476194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:48.517216   78367 cri.go:89] found id: ""
	I1213 20:23:48.517258   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.517269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:48.517277   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:48.517381   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:48.562993   78367 cri.go:89] found id: ""
	I1213 20:23:48.563092   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.563117   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:48.563135   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:48.563223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:48.604109   78367 cri.go:89] found id: ""
	I1213 20:23:48.604202   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.604224   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:48.604250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:48.604348   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:48.651185   78367 cri.go:89] found id: ""
	I1213 20:23:48.651219   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.651230   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:48.651238   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:48.651317   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:48.695266   78367 cri.go:89] found id: ""
	I1213 20:23:48.695305   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.695317   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:48.695325   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:48.695389   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:48.741459   78367 cri.go:89] found id: ""
	I1213 20:23:48.741495   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.741506   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:48.741513   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:48.741573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:48.785599   78367 cri.go:89] found id: ""
	I1213 20:23:48.785684   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.785701   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:48.785716   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:48.785744   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:48.845741   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:48.845777   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:48.862971   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:48.863013   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:48.934300   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:48.934328   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:48.934344   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:49.023110   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:49.023154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:51.562149   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:51.580078   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:51.580154   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:51.624644   78367 cri.go:89] found id: ""
	I1213 20:23:51.624677   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.624688   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:51.624696   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:51.624756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:51.910904   77510 kubeadm.go:310] [api-check] The API server is healthy after 5.001533218s
	I1213 20:23:51.928221   77510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:51.955180   77510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:51.988925   77510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:51.989201   77510 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-355668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:52.006352   77510 kubeadm.go:310] [bootstrap-token] Using token: 62dvzj.gok594hxuxcynd4x
	I1213 20:23:50.939565   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:50.940051   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:50.940081   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:50.940008   79856 retry.go:31] will retry after 2.96641877s: waiting for domain to come up
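	(The retry.go lines above show the libvirt bring-up polling for the domain's DHCP lease and backing off between attempts until an IP appears. A self-contained sketch of that poll-with-backoff pattern, not minikube's actual code; lookupDomainIP is a hypothetical stand-in for the real lease lookup.)

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupDomainIP is a placeholder: the real flow queries the libvirt DHCP leases.
    func lookupDomainIP(domain string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForDomainIP polls until the domain reports an IP or the timeout expires,
    // doubling the wait between attempts.
    func waitForDomainIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := time.Second
        for time.Now().Before(deadline) {
            if ip, err := lookupDomainIP(domain); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for domain to come up\n", backoff)
            time.Sleep(backoff)
            backoff *= 2
        }
        return "", fmt.Errorf("domain %s did not come up within %s", domain, timeout)
    }

    func main() {
        if _, err := waitForDomainIP("newest-cni-535459", 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }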
	I1213 20:23:51.311455   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:51.311485   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:51.369375   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:51.369403   77223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:51.424081   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:51.424111   77223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:51.425876   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.425896   77223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:51.467889   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.513308   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:51.513340   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:51.601978   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:51.602009   77223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:51.627122   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.627201   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.627580   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.629153   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629172   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629183   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.629191   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.629445   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629463   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629473   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.641253   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.641282   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.641576   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.641592   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.641593   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.656503   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:51.656529   77223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:51.736524   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:51.736554   77223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:51.766699   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:51.766786   77223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:51.801572   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:51.801601   77223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:51.819179   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:52.110163   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110190   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110480   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.110500   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.110507   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110514   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110508   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113643   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113667   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.113674   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551336   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08338913s)
	I1213 20:23:52.551397   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551410   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551700   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.551721   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551731   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551739   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551951   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.552000   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.552008   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.552025   77223 addons.go:475] Verifying addon metrics-server=true in "no-preload-475934"
	I1213 20:23:53.145015   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:53.262929   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.44371085s)
	I1213 20:23:53.262987   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263007   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263335   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263355   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.263365   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263373   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263380   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263640   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263680   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263688   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.265176   77223 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-475934 addons enable metrics-server
	
	I1213 20:23:53.266358   77223 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1213 20:23:52.007746   77510 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:52.007914   77510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:52.022398   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:52.033846   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:52.038811   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:52.052112   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:52.068899   77510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:52.319919   77510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:52.804645   77510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:53.320002   77510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:53.321529   77510 kubeadm.go:310] 
	I1213 20:23:53.321648   77510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:53.321684   77510 kubeadm.go:310] 
	I1213 20:23:53.321797   77510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:53.321809   77510 kubeadm.go:310] 
	I1213 20:23:53.321843   77510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:53.321931   77510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:53.322014   77510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:53.322039   77510 kubeadm.go:310] 
	I1213 20:23:53.322140   77510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:53.322154   77510 kubeadm.go:310] 
	I1213 20:23:53.322237   77510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:53.322253   77510 kubeadm.go:310] 
	I1213 20:23:53.322327   77510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:53.322439   77510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:53.322505   77510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:53.322511   77510 kubeadm.go:310] 
	I1213 20:23:53.322642   77510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:53.322757   77510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:53.322771   77510 kubeadm.go:310] 
	I1213 20:23:53.322937   77510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323079   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:53.323132   77510 kubeadm.go:310] 	--control-plane 
	I1213 20:23:53.323149   77510 kubeadm.go:310] 
	I1213 20:23:53.323269   77510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:53.323280   77510 kubeadm.go:310] 
	I1213 20:23:53.323407   77510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323556   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 20:23:53.324551   77510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:23:53.324579   77510 cni.go:84] Creating CNI manager for ""
	I1213 20:23:53.324591   77510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:53.326071   77510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:23:53.327260   77510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:53.338245   77510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:53.359781   77510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:53.359954   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.360067   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-355668 minikube.k8s.io/updated_at=2024_12_13T20_23_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=default-k8s-diff-port-355668 minikube.k8s.io/primary=true
	I1213 20:23:53.378620   77510 ops.go:34] apiserver oom_adj: -16
	I1213 20:23:53.595107   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.095889   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.596033   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.267500   77223 addons.go:510] duration metric: took 2.472063966s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1213 20:23:55.608441   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:51.673392   78367 cri.go:89] found id: ""
	I1213 20:23:51.673421   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.673432   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:51.673440   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:51.673501   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:51.721445   78367 cri.go:89] found id: ""
	I1213 20:23:51.721472   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.721480   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:51.721488   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:51.721544   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:51.755079   78367 cri.go:89] found id: ""
	I1213 20:23:51.755112   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.755123   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:51.755131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:51.755194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:51.796420   78367 cri.go:89] found id: ""
	I1213 20:23:51.796457   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.796470   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:51.796478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:51.796542   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:51.830054   78367 cri.go:89] found id: ""
	I1213 20:23:51.830080   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.830090   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:51.830098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:51.830153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:51.867546   78367 cri.go:89] found id: ""
	I1213 20:23:51.867574   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.867584   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:51.867592   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:51.867653   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:51.911804   78367 cri.go:89] found id: ""
	I1213 20:23:51.911830   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.911841   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:51.911853   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:51.911867   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:51.981311   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:51.981340   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:51.997948   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:51.997995   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:52.078493   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:52.078526   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:52.078541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:52.181165   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:52.181213   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:54.728341   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:54.742062   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:54.742122   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:54.779920   78367 cri.go:89] found id: ""
	I1213 20:23:54.779947   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.779958   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:54.779966   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:54.780021   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:54.813600   78367 cri.go:89] found id: ""
	I1213 20:23:54.813631   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.813641   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:54.813649   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:54.813711   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:54.846731   78367 cri.go:89] found id: ""
	I1213 20:23:54.846761   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.846771   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:54.846778   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:54.846837   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:54.878598   78367 cri.go:89] found id: ""
	I1213 20:23:54.878628   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.878638   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:54.878646   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:54.878706   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:54.914259   78367 cri.go:89] found id: ""
	I1213 20:23:54.914293   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.914304   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:54.914318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:54.914383   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:54.947232   78367 cri.go:89] found id: ""
	I1213 20:23:54.947264   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.947275   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:54.947283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:54.947350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:54.992079   78367 cri.go:89] found id: ""
	I1213 20:23:54.992108   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.992118   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:54.992125   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:54.992184   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:55.035067   78367 cri.go:89] found id: ""
	I1213 20:23:55.035093   78367 logs.go:282] 0 containers: []
	W1213 20:23:55.035100   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:55.035109   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:55.035122   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:55.108198   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:55.108224   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:55.108238   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:55.197303   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:55.197333   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:55.248131   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:55.248154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:55.301605   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:55.301635   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:53.907724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:53.908424   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:53.908470   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:53.908391   79856 retry.go:31] will retry after 4.35778362s: waiting for domain to come up
	I1213 20:23:55.095857   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:55.595908   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.095409   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.595238   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.095945   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.595757   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.095963   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.198049   77510 kubeadm.go:1113] duration metric: took 4.838144553s to wait for elevateKubeSystemPrivileges
	I1213 20:23:58.198082   77510 kubeadm.go:394] duration metric: took 5m1.770847274s to StartCluster
	I1213 20:23:58.198102   77510 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.198176   77510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:58.199549   77510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.199800   77510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:58.199963   77510 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:58.200086   77510 config.go:182] Loaded profile config "default-k8s-diff-port-355668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:58.200131   77510 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200150   77510 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200166   77510 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:58.200189   77510 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200199   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200211   77510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-355668"
	I1213 20:23:58.200610   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200626   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200639   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200656   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200712   77510 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200712   77510 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200725   77510 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200732   77510 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:58.200733   77510 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200742   77510 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:58.200754   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200771   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.205916   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205937   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205960   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.205976   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.206755   77510 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:58.208075   77510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:58.223074   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1213 20:23:58.223694   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.224155   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.224170   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.224674   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.224863   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.226583   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I1213 20:23:58.227150   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.227693   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.227712   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.228163   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.228437   77510 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.228457   77510 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:58.228483   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.228838   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228847   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228871   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.228882   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.238833   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I1213 20:23:58.245605   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I1213 20:23:58.246100   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.246630   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.246648   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.247050   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.247623   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.247662   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.249751   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I1213 20:23:58.250222   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.250772   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.250789   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.254939   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1213 20:23:58.254977   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.254944   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.255395   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.255455   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.255928   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.255944   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.256275   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.256811   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.256843   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.258976   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.259498   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.259515   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.260075   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.260720   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.260752   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.261030   77510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:58.262210   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:58.262229   77510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:58.262248   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.265414   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266021   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.266045   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266278   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.266441   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.266627   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.266776   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.268367   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1213 20:23:58.269174   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.270087   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.270108   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.270905   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.271343   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.278504   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I1213 20:23:58.279047   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.279669   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.279685   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.280236   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.280583   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.281949   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1213 20:23:58.282310   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.283003   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.283020   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.283408   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.286964   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.286998   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.287032   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.287233   77510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.287250   77510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:58.287276   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.288987   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.289809   77510 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:58.290685   77510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:58.292753   77510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.292774   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:58.292792   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.292849   77510 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:56.611155   77223 pod_ready.go:93] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:56.611190   77223 pod_ready.go:82] duration metric: took 5.510087654s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:56.611203   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116912   77223 pod_ready.go:93] pod "kube-apiserver-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.116945   77223 pod_ready.go:82] duration metric: took 505.733979ms for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116958   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121384   77223 pod_ready.go:93] pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.121411   77223 pod_ready.go:82] duration metric: took 4.445498ms for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121425   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.129454   77223 pod_ready.go:103] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:59.662780   77223 pod_ready.go:93] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:59.662813   77223 pod_ready.go:82] duration metric: took 2.541378671s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.662828   77223 pod_ready.go:39] duration metric: took 8.566311765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:59.662869   77223 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:23:59.662936   77223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:59.685691   77223 api_server.go:72] duration metric: took 8.890275631s to wait for apiserver process to appear ...
	I1213 20:23:59.685722   77223 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:23:59.685743   77223 api_server.go:253] Checking apiserver healthz at https://192.168.61.128:8443/healthz ...
	I1213 20:23:59.692539   77223 api_server.go:279] https://192.168.61.128:8443/healthz returned 200:
	ok
	I1213 20:23:59.694289   77223 api_server.go:141] control plane version: v1.31.2
	I1213 20:23:59.694317   77223 api_server.go:131] duration metric: took 8.58708ms to wait for apiserver health ...
	I1213 20:23:59.694327   77223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:23:59.703648   77223 system_pods.go:59] 9 kube-system pods found
	I1213 20:23:59.703682   77223 system_pods.go:61] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.703691   77223 system_pods.go:61] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.703697   77223 system_pods.go:61] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.703703   77223 system_pods.go:61] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.703711   77223 system_pods.go:61] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.703716   77223 system_pods.go:61] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.703721   77223 system_pods.go:61] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.703732   77223 system_pods.go:61] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.703742   77223 system_pods.go:61] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.703752   77223 system_pods.go:74] duration metric: took 9.418447ms to wait for pod list to return data ...
	I1213 20:23:59.703761   77223 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:23:59.713584   77223 default_sa.go:45] found service account: "default"
	I1213 20:23:59.713610   77223 default_sa.go:55] duration metric: took 9.841478ms for default service account to be created ...
	I1213 20:23:59.713621   77223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:23:59.720207   77223 system_pods.go:86] 9 kube-system pods found
	I1213 20:23:59.720230   77223 system_pods.go:89] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.720236   77223 system_pods.go:89] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.720240   77223 system_pods.go:89] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.720244   77223 system_pods.go:89] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.720247   77223 system_pods.go:89] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.720251   77223 system_pods.go:89] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.720255   77223 system_pods.go:89] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.720268   77223 system_pods.go:89] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.720272   77223 system_pods.go:89] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.720279   77223 system_pods.go:126] duration metric: took 6.653114ms to wait for k8s-apps to be running ...
	I1213 20:23:59.720288   77223 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:23:59.720325   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:59.743000   77223 system_svc.go:56] duration metric: took 22.70094ms WaitForService to wait for kubelet
	I1213 20:23:59.743035   77223 kubeadm.go:582] duration metric: took 8.947624109s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:23:59.743057   77223 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:23:59.747281   77223 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:23:59.747321   77223 node_conditions.go:123] node cpu capacity is 2
	I1213 20:23:59.747337   77223 node_conditions.go:105] duration metric: took 4.273745ms to run NodePressure ...
	I1213 20:23:59.747353   77223 start.go:241] waiting for startup goroutines ...
	I1213 20:23:59.747363   77223 start.go:246] waiting for cluster config update ...
	I1213 20:23:59.747380   77223 start.go:255] writing updated cluster config ...
	I1213 20:23:59.747732   77223 ssh_runner.go:195] Run: rm -f paused
	I1213 20:23:59.820239   77223 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:23:59.821954   77223 out.go:177] * Done! kubectl is now configured to use "no-preload-475934" cluster and "default" namespace by default
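The api_server.go lines above show the readiness gate for this profile: minikube polls the apiserver's /healthz endpoint until it answers 200 "ok" before moving on to the kube-system pod checks. A minimal Go sketch of that kind of probe follows; the URL is copied from the log, while the client settings (per-request timeout, skipping TLS verification for the cluster's self-signed certificate, 500ms poll interval) are illustrative assumptions, not minikube's actual implementation.

// Sketch only: poll https://<apiserver>/healthz until it returns 200 "ok"
// or the overall deadline passes. Not minikube's api_server.go code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: the apiserver cert is not trusted here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.128:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}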
	I1213 20:23:58.293751   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294127   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:58.294142   77510 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:58.294178   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.294280   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.294376   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294629   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.294779   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.294932   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.295104   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.296706   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297082   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.297117   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297252   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.297422   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.297574   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.297699   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.298144   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298502   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.298608   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298673   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.298828   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.299124   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.299253   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.437780   77510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:58.458240   77510 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495039   77510 node_ready.go:49] node "default-k8s-diff-port-355668" has status "Ready":"True"
	I1213 20:23:58.495124   77510 node_ready.go:38] duration metric: took 36.851728ms for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495141   77510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:58.506404   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:58.548351   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:58.548377   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:23:58.570739   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:58.570762   77510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:58.591010   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.598380   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.598406   77510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:58.612228   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:58.612255   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:58.616620   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.643759   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:58.643785   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:58.657745   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.696453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:58.696548   77510 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:58.760682   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:58.760710   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:58.851490   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:58.851514   77510 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:58.930302   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:58.930330   77510 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:58.991218   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:58.991261   77510 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:59.066139   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:59.066169   77510 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:59.102453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.102479   77510 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:59.182801   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.970886   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.379839482s)
	I1213 20:23:59.970942   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.970957   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971058   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354409285s)
	I1213 20:23:59.971081   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971091   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971200   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313427588s)
	I1213 20:23:59.971217   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971227   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971296   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971333   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971340   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971348   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971355   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971564   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971577   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971587   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971594   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971800   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971830   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971836   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971848   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971861   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971860   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971873   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971883   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.974115   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974153   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974161   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.974168   77510 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-355668"
	I1213 20:23:59.974222   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974245   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974255   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.001667   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:00.001698   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:00.002135   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:00.002164   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.002136   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:24:00.532171   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.475325   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.292470675s)
	I1213 20:24:01.475377   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475399   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475719   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475733   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.475742   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475750   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475977   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475990   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.478505   77510 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-355668 addons enable metrics-server
	
	I1213 20:24:01.479872   77510 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
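The addon sequence above follows one pattern per manifest: scp the YAML into /etc/kubernetes/addons/ inside the guest, then apply the batch with the bundled kubectl against the in-VM kubeconfig. The sketch below reproduces only the apply step with os/exec, as an illustration; in the real flow the command runs inside the VM over SSH (ssh_runner), and the kubectl and kubeconfig paths here are copied from the logged command line rather than discovered.

// Illustrative sketch of the logged apply step; not minikube's addons code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f <m1> -f <m2> ...` with KUBECONFIG set,
// mirroring the command line seen in the log above.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}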
	I1213 20:23:58.270264   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.270365   79820 main.go:141] libmachine: (newest-cni-535459) found domain IP: 192.168.50.11
	I1213 20:23:58.270394   79820 main.go:141] libmachine: (newest-cni-535459) reserving static IP address...
	I1213 20:23:58.270420   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has current primary IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.271183   79820 main.go:141] libmachine: (newest-cni-535459) reserved static IP address 192.168.50.11 for domain newest-cni-535459
	I1213 20:23:58.271227   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.271247   79820 main.go:141] libmachine: (newest-cni-535459) waiting for SSH...
	I1213 20:23:58.271278   79820 main.go:141] libmachine: (newest-cni-535459) DBG | skip adding static IP to network mk-newest-cni-535459 - found existing host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"}
	I1213 20:23:58.271286   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Getting to WaitForSSH function...
	I1213 20:23:58.277440   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283137   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.283166   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283641   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH client type: external
	I1213 20:23:58.283664   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa (-rw-------)
	I1213 20:23:58.283702   79820 main.go:141] libmachine: (newest-cni-535459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:23:58.283712   79820 main.go:141] libmachine: (newest-cni-535459) DBG | About to run SSH command:
	I1213 20:23:58.283724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | exit 0
	I1213 20:23:58.431895   79820 main.go:141] libmachine: (newest-cni-535459) DBG | SSH cmd err, output: <nil>: 
	I1213 20:23:58.432276   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetConfigRaw
	I1213 20:23:58.433028   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.436521   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.436848   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.436875   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.437192   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:58.437455   79820 machine.go:93] provisionDockerMachine start ...
	I1213 20:23:58.437480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:58.437689   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.440580   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441089   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.441132   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441277   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.441491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441620   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441769   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.441918   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.442164   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.442183   79820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 20:23:58.559163   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 20:23:58.559200   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559468   79820 buildroot.go:166] provisioning hostname "newest-cni-535459"
	I1213 20:23:58.559498   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559678   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.562818   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563374   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.563402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563582   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.563766   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.563919   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.564082   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.564268   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.564508   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.564530   79820 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-535459 && echo "newest-cni-535459" | sudo tee /etc/hostname
	I1213 20:23:58.696712   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-535459
	
	I1213 20:23:58.696798   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.700359   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.700838   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.700864   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.701015   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.701205   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701411   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701579   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.701764   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.702008   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.702036   79820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-535459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-535459/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-535459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:23:58.827902   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:23:58.827937   79820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:23:58.827979   79820 buildroot.go:174] setting up certificates
	I1213 20:23:58.827999   79820 provision.go:84] configureAuth start
	I1213 20:23:58.828016   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.828306   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.831180   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831550   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.831588   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831736   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.833951   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834312   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.834355   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834505   79820 provision.go:143] copyHostCerts
	I1213 20:23:58.834581   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:23:58.834598   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:23:58.834689   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:23:58.834879   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:23:58.834898   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:23:58.834948   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:23:58.835048   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:23:58.835067   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:23:58.835107   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:23:58.835195   79820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-535459 san=[127.0.0.1 192.168.50.11 localhost minikube newest-cni-535459]
	I1213 20:23:59.091370   79820 provision.go:177] copyRemoteCerts
	I1213 20:23:59.091432   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:23:59.091482   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.094717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095146   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.095177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095370   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.095547   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.095707   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.095832   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.177442   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:23:59.202054   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 20:23:59.228527   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 20:23:59.254148   79820 provision.go:87] duration metric: took 426.134893ms to configureAuth
	I1213 20:23:59.254187   79820 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:23:59.254402   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:59.254467   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.257684   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258113   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.258139   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258369   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.258575   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258743   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258913   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.259101   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.259355   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.259378   79820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:23:59.495940   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:23:59.495974   79820 machine.go:96] duration metric: took 1.058500785s to provisionDockerMachine
	I1213 20:23:59.495990   79820 start.go:293] postStartSetup for "newest-cni-535459" (driver="kvm2")
	I1213 20:23:59.496006   79820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:23:59.496029   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.496330   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:23:59.496359   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.499780   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500193   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.500234   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500450   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.500642   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.500813   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.500918   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.582993   79820 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:23:59.588260   79820 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:23:59.588297   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:23:59.588362   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:23:59.588431   79820 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:23:59.588562   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:23:59.601947   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:23:59.631405   79820 start.go:296] duration metric: took 135.398616ms for postStartSetup
	I1213 20:23:59.631454   79820 fix.go:56] duration metric: took 21.330020412s for fixHost
	I1213 20:23:59.631480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.634516   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.634952   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.635000   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.635198   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.635387   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635543   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635691   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.635840   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.636070   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.636084   79820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:23:59.749289   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734121439.718006490
	
	I1213 20:23:59.749313   79820 fix.go:216] guest clock: 1734121439.718006490
	I1213 20:23:59.749322   79820 fix.go:229] Guest: 2024-12-13 20:23:59.71800649 +0000 UTC Remote: 2024-12-13 20:23:59.631459768 +0000 UTC m=+21.470518452 (delta=86.546722ms)
	I1213 20:23:59.749347   79820 fix.go:200] guest clock delta is within tolerance: 86.546722ms
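The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the host clock and accept the machine when the delta is within tolerance. The sketch below redoes that arithmetic with the two timestamps from the log; the parsing helper and the tolerance constant are assumptions made for illustration (the log does not state the tolerance), not minikube's code.

// Illustrative re-computation of the guest clock delta logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNano parses `date +%s.%N` output such as "1734121439.718006490".
func parseEpochNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // normalize fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpochNano("1734121439.718006490") // guest `date +%s.%N` from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 12, 13, 20, 23, 59, 631459768, time.UTC) // host sample from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed value; the log only says "within tolerance"
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}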
	I1213 20:23:59.749361   79820 start.go:83] releasing machines lock for "newest-cni-535459", held for 21.447944205s
	I1213 20:23:59.749385   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.749655   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:59.752968   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.753426   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753606   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754075   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754269   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754364   79820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:23:59.754400   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.754690   79820 ssh_runner.go:195] Run: cat /version.json
	I1213 20:23:59.754714   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.757878   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.767628   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.767685   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768022   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768079   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768303   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.768325   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768458   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768631   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768681   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768814   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.768849   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.769016   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.769027   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.888086   79820 ssh_runner.go:195] Run: systemctl --version
	I1213 20:23:59.899362   79820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:24:00.063446   79820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:24:00.072249   79820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:24:00.072336   79820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:24:00.093748   79820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 20:24:00.093780   79820 start.go:495] detecting cgroup driver to use...
	I1213 20:24:00.093849   79820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:24:00.117356   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:24:00.135377   79820 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:24:00.135437   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:24:00.155178   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:24:00.171890   79820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:24:00.321669   79820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:24:00.533366   79820 docker.go:233] disabling docker service ...
	I1213 20:24:00.533432   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:24:00.551511   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:24:00.569283   79820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:24:00.748948   79820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:24:00.924287   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:24:00.938559   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:24:00.958306   79820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 20:24:00.958394   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.968592   79820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:24:00.968667   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.979213   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.993825   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.004141   79820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:24:01.015195   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.025731   79820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.048789   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.062542   79820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:24:01.074137   79820 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:24:01.074218   79820 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:24:01.091233   79820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 20:24:01.103721   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:01.274965   79820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:24:01.400580   79820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:24:01.400700   79820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:24:01.406514   79820 start.go:563] Will wait 60s for crictl version
	I1213 20:24:01.406581   79820 ssh_runner.go:195] Run: which crictl
	I1213 20:24:01.411798   79820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:24:01.463581   79820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:24:01.463672   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.503505   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.545804   79820 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1213 20:24:01.547133   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:24:01.550717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:01.551198   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551399   79820 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 20:24:01.555655   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:01.574604   79820 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 20:23:57.815345   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:57.830459   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:57.830536   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:57.867421   78367 cri.go:89] found id: ""
	I1213 20:23:57.867450   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.867462   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:57.867470   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:57.867528   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:57.904972   78367 cri.go:89] found id: ""
	I1213 20:23:57.905010   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.905021   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:57.905029   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:57.905092   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:57.951889   78367 cri.go:89] found id: ""
	I1213 20:23:57.951916   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.951928   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:57.951936   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:57.952010   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:57.998664   78367 cri.go:89] found id: ""
	I1213 20:23:57.998697   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.998708   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:57.998715   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:57.998772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:58.047566   78367 cri.go:89] found id: ""
	I1213 20:23:58.047597   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.047608   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:58.047625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:58.047686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:58.082590   78367 cri.go:89] found id: ""
	I1213 20:23:58.082619   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.082629   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:58.082637   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:58.082694   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:58.125035   78367 cri.go:89] found id: ""
	I1213 20:23:58.125071   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.125080   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:58.125087   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:58.125147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:58.168019   78367 cri.go:89] found id: ""
	I1213 20:23:58.168049   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.168060   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:58.168078   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:58.168092   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:58.268179   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:58.268212   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:58.303166   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:58.303192   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:58.393172   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:58.393206   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:58.393220   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:58.489198   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:58.489230   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:01.033661   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:01.047673   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:01.047747   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:01.089498   78367 cri.go:89] found id: ""
	I1213 20:24:01.089526   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.089536   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:01.089543   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:01.089605   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:01.130215   78367 cri.go:89] found id: ""
	I1213 20:24:01.130245   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.130256   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:01.130264   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:01.130326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:01.177064   78367 cri.go:89] found id: ""
	I1213 20:24:01.177102   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.177119   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:01.177126   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:01.177187   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:01.231277   78367 cri.go:89] found id: ""
	I1213 20:24:01.231312   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.231324   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:01.231332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:01.231395   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:01.277419   78367 cri.go:89] found id: ""
	I1213 20:24:01.277446   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.277456   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:01.277463   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:01.277519   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:01.322970   78367 cri.go:89] found id: ""
	I1213 20:24:01.322996   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.323007   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:01.323017   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:01.323087   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:01.369554   78367 cri.go:89] found id: ""
	I1213 20:24:01.369585   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.369596   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:01.369603   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:01.369661   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:01.411927   78367 cri.go:89] found id: ""
	I1213 20:24:01.411957   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.411967   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:01.411987   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:01.412005   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:01.486061   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:01.486097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:01.500644   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:01.500673   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:01.578266   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:01.578283   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:01.578293   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:01.575794   79820 kubeadm.go:883] updating cluster {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:24:01.575963   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:24:01.576035   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:01.617299   79820 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1213 20:24:01.617414   79820 ssh_runner.go:195] Run: which lz4
	I1213 20:24:01.621480   79820 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:24:01.625517   79820 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:24:01.625563   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1213 20:24:03.034691   79820 crio.go:462] duration metric: took 1.413259837s to copy over tarball
	I1213 20:24:03.034768   79820 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:24:01.481491   77510 addons.go:510] duration metric: took 3.281543559s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:02.601672   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.687325   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:01.687362   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.239043   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:04.252218   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:04.252292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:04.294778   78367 cri.go:89] found id: ""
	I1213 20:24:04.294810   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.294820   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:04.294828   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:04.294910   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:04.339012   78367 cri.go:89] found id: ""
	I1213 20:24:04.339049   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.339061   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:04.339069   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:04.339134   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:04.391028   78367 cri.go:89] found id: ""
	I1213 20:24:04.391064   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.391076   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:04.391084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:04.391147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:04.436260   78367 cri.go:89] found id: ""
	I1213 20:24:04.436291   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.436308   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:04.436316   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:04.436372   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:04.485225   78367 cri.go:89] found id: ""
	I1213 20:24:04.485255   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.485274   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:04.485283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:04.485347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:04.527198   78367 cri.go:89] found id: ""
	I1213 20:24:04.527228   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.527239   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:04.527247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:04.527306   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:04.567885   78367 cri.go:89] found id: ""
	I1213 20:24:04.567915   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.567926   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:04.567934   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:04.567984   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:04.608495   78367 cri.go:89] found id: ""
	I1213 20:24:04.608535   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.608546   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:04.608557   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:04.608571   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:04.691701   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:04.691735   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.739203   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:04.739236   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:04.815994   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:04.816050   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:04.851237   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:04.851277   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:04.994736   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:05.429979   79820 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.395156779s)
	I1213 20:24:05.430008   79820 crio.go:469] duration metric: took 2.395289211s to extract the tarball
	I1213 20:24:05.430017   79820 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 20:24:05.486315   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:05.546704   79820 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 20:24:05.546729   79820 cache_images.go:84] Images are preloaded, skipping loading
	I1213 20:24:05.546737   79820 kubeadm.go:934] updating node { 192.168.50.11 8443 v1.31.2 crio true true} ...
	I1213 20:24:05.546882   79820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-535459 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 20:24:05.546997   79820 ssh_runner.go:195] Run: crio config
	I1213 20:24:05.617708   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:05.617734   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:05.617757   79820 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1213 20:24:05.617784   79820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.11 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-535459 NodeName:newest-cni-535459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 20:24:05.617925   79820 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-535459"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:24:05.618013   79820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 20:24:05.631181   79820 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:24:05.631261   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:24:05.642971   79820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1213 20:24:05.662761   79820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:24:05.682676   79820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1213 20:24:05.706170   79820 ssh_runner.go:195] Run: grep 192.168.50.11	control-plane.minikube.internal$ /etc/hosts
	I1213 20:24:05.710946   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:05.733291   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:05.878920   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:05.899390   79820 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459 for IP: 192.168.50.11
	I1213 20:24:05.899419   79820 certs.go:194] generating shared ca certs ...
	I1213 20:24:05.899438   79820 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:05.899615   79820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:24:05.899668   79820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:24:05.899681   79820 certs.go:256] generating profile certs ...
	I1213 20:24:05.899786   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/client.key
	I1213 20:24:05.899867   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key.6c5572a8
	I1213 20:24:05.899919   79820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key
	I1213 20:24:05.900072   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:24:05.900112   79820 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:24:05.900124   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:24:05.900156   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:24:05.900187   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:24:05.900215   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:24:05.900269   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:24:05.901141   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:24:05.939874   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:24:05.978129   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:24:06.014027   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:24:06.054231   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 20:24:06.082617   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 20:24:06.113846   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:24:06.160961   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:24:06.186616   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:24:06.210814   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:24:06.235875   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:24:06.268351   79820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:24:06.289062   79820 ssh_runner.go:195] Run: openssl version
	I1213 20:24:06.295624   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:24:06.309685   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314119   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314222   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.320247   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
	I1213 20:24:06.331949   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:24:06.343731   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348018   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348081   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.353554   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:24:06.366858   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:24:06.377728   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382326   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382401   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.390103   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:24:06.404838   79820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:24:06.410770   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 20:24:06.422025   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 20:24:06.431833   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 20:24:06.438647   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 20:24:06.444814   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 20:24:06.452219   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 20:24:06.458272   79820 kubeadm.go:392] StartCluster: {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:24:06.458424   79820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:24:06.458491   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.506732   79820 cri.go:89] found id: ""
	I1213 20:24:06.506810   79820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:24:06.518343   79820 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 20:24:06.518376   79820 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 20:24:06.518430   79820 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 20:24:06.531209   79820 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 20:24:06.532070   79820 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-535459" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:06.532572   79820 kubeconfig.go:62] /home/jenkins/minikube-integration/20090-12353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-535459" cluster setting kubeconfig missing "newest-cni-535459" context setting]
	I1213 20:24:06.533290   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:06.539651   79820 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 20:24:06.550828   79820 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.11
	I1213 20:24:06.550886   79820 kubeadm.go:1160] stopping kube-system containers ...
	I1213 20:24:06.550902   79820 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 20:24:06.550970   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.612618   79820 cri.go:89] found id: ""
	I1213 20:24:06.612750   79820 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 20:24:06.636007   79820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:24:06.648489   79820 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:24:06.648512   79820 kubeadm.go:157] found existing configuration files:
	
	I1213 20:24:06.648563   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:24:06.660079   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:24:06.660154   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:24:06.672333   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:24:06.683617   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:24:06.683683   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:24:06.695818   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.706996   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:24:06.707073   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.718672   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:24:06.729768   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:24:06.729838   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:24:06.742002   79820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:24:06.754184   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:07.010247   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.064932   79820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054652155s)
	I1213 20:24:08.064963   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:05.014076   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:06.021280   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.021310   77510 pod_ready.go:82] duration metric: took 7.514875372s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.021326   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035861   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.035888   77510 pod_ready.go:82] duration metric: took 14.555021ms for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035900   77510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979006   77510 pod_ready.go:93] pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.979035   77510 pod_ready.go:82] duration metric: took 943.126351ms for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979049   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989635   77510 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.989665   77510 pod_ready.go:82] duration metric: took 10.607567ms for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989677   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999141   77510 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.999235   77510 pod_ready.go:82] duration metric: took 9.54585ms for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999273   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012290   77510 pod_ready.go:93] pod "kube-proxy-vjsf7" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.012314   77510 pod_ready.go:82] duration metric: took 13.004089ms for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012327   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842063   77510 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.842088   77510 pod_ready.go:82] duration metric: took 829.753011ms for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842099   77510 pod_ready.go:39] duration metric: took 9.346942648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:24:07.842114   77510 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:07.842174   77510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.858079   77510 api_server.go:72] duration metric: took 9.658239691s to wait for apiserver process to appear ...
	I1213 20:24:07.858107   77510 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:07.858133   77510 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8444/healthz ...
	I1213 20:24:07.864534   77510 api_server.go:279] https://192.168.39.233:8444/healthz returned 200:
	ok
	I1213 20:24:07.865713   77510 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:07.865744   77510 api_server.go:131] duration metric: took 7.628649ms to wait for apiserver health ...
	I1213 20:24:07.865758   77510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:07.872447   77510 system_pods.go:59] 9 kube-system pods found
	I1213 20:24:07.872473   77510 system_pods.go:61] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.872478   77510 system_pods.go:61] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.872482   77510 system_pods.go:61] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.872486   77510 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.872490   77510 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.872492   77510 system_pods.go:61] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.872496   77510 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.872502   77510 system_pods.go:61] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.872507   77510 system_pods.go:61] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.872518   77510 system_pods.go:74] duration metric: took 6.753419ms to wait for pod list to return data ...
	I1213 20:24:07.872532   77510 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:07.875714   77510 default_sa.go:45] found service account: "default"
	I1213 20:24:07.875737   77510 default_sa.go:55] duration metric: took 3.19796ms for default service account to be created ...
	I1213 20:24:07.875748   77510 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:24:07.881451   77510 system_pods.go:86] 9 kube-system pods found
	I1213 20:24:07.881474   77510 system_pods.go:89] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.881480   77510 system_pods.go:89] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.881484   77510 system_pods.go:89] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.881489   77510 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.881493   77510 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.881496   77510 system_pods.go:89] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.881500   77510 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.881507   77510 system_pods.go:89] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.881512   77510 system_pods.go:89] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.881519   77510 system_pods.go:126] duration metric: took 5.765842ms to wait for k8s-apps to be running ...
	I1213 20:24:07.881529   77510 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:24:07.881576   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:24:07.896968   77510 system_svc.go:56] duration metric: took 15.429735ms WaitForService to wait for kubelet
	I1213 20:24:07.897000   77510 kubeadm.go:582] duration metric: took 9.69716545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:24:07.897023   77510 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:08.181918   77510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:08.181946   77510 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:08.181959   77510 node_conditions.go:105] duration metric: took 284.930197ms to run NodePressure ...
	I1213 20:24:08.181973   77510 start.go:241] waiting for startup goroutines ...
	I1213 20:24:08.181983   77510 start.go:246] waiting for cluster config update ...
	I1213 20:24:08.181997   77510 start.go:255] writing updated cluster config ...
	I1213 20:24:08.257251   77510 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:08.310968   77510 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:08.560633   77510 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-355668" cluster and "default" namespace by default
	I1213 20:24:07.495945   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.509565   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:07.509640   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:07.548332   78367 cri.go:89] found id: ""
	I1213 20:24:07.548357   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.548365   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:07.548371   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:07.548417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:07.585718   78367 cri.go:89] found id: ""
	I1213 20:24:07.585745   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.585752   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:07.585758   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:07.585816   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:07.620441   78367 cri.go:89] found id: ""
	I1213 20:24:07.620470   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.620478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:07.620485   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:07.620543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:07.654638   78367 cri.go:89] found id: ""
	I1213 20:24:07.654671   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.654682   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:07.654690   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:07.654752   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:07.690251   78367 cri.go:89] found id: ""
	I1213 20:24:07.690279   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.690289   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:07.690296   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:07.690362   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:07.733229   78367 cri.go:89] found id: ""
	I1213 20:24:07.733260   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.733268   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:07.733274   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:07.733325   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:07.767187   78367 cri.go:89] found id: ""
	I1213 20:24:07.767218   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.767229   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:07.767237   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:07.767309   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:07.803454   78367 cri.go:89] found id: ""
	I1213 20:24:07.803477   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.803485   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:07.803493   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:07.803504   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:07.884578   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:07.884602   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:07.884616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:07.966402   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:07.966448   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.010335   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:08.010368   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:08.064614   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:08.064647   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:10.580540   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:10.597959   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:10.598030   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:10.667638   78367 cri.go:89] found id: ""
	I1213 20:24:10.667665   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.667675   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:10.667683   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:10.667739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:10.728894   78367 cri.go:89] found id: ""
	I1213 20:24:10.728918   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.728929   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:10.728936   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:10.728992   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:10.771954   78367 cri.go:89] found id: ""
	I1213 20:24:10.771991   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.772001   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:10.772009   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:10.772067   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:10.818154   78367 cri.go:89] found id: ""
	I1213 20:24:10.818181   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.818188   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:10.818193   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:10.818240   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:10.858974   78367 cri.go:89] found id: ""
	I1213 20:24:10.859003   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.859014   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:10.859021   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:10.859086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:10.908481   78367 cri.go:89] found id: ""
	I1213 20:24:10.908511   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.908524   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:10.908532   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:10.908604   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:10.944951   78367 cri.go:89] found id: ""
	I1213 20:24:10.944979   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.944987   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:10.945001   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:10.945064   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:10.979563   78367 cri.go:89] found id: ""
	I1213 20:24:10.979588   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.979596   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:10.979604   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:10.979616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:11.052472   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:11.052507   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:11.068916   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:11.068947   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:11.146800   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:11.146826   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:11.146839   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:11.248307   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:11.248347   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.321808   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.374083   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.441322   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:08.441414   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:08.942600   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.441659   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.480026   79820 api_server.go:72] duration metric: took 1.038702713s to wait for apiserver process to appear ...
	I1213 20:24:09.480059   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:09.480084   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:09.480678   79820 api_server.go:269] stopped: https://192.168.50.11:8443/healthz: Get "https://192.168.50.11:8443/healthz": dial tcp 192.168.50.11:8443: connect: connection refused
	I1213 20:24:09.980257   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.178320   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.178365   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.178382   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.185253   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.185281   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.480680   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.491410   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.491444   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:12.981159   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.986141   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.986171   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:13.480205   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:13.485225   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:13.494430   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:13.494452   79820 api_server.go:131] duration metric: took 4.014386318s to wait for apiserver health ...
	I1213 20:24:13.494460   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:13.494465   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:13.496012   79820 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:24:13.497376   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:24:13.511144   79820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:24:13.533969   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:13.556295   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:13.556338   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:13.556349   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:13.556359   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:13.556368   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:13.556384   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 20:24:13.556397   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:13.556406   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:13.556420   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 20:24:13.556432   79820 system_pods.go:74] duration metric: took 22.427974ms to wait for pod list to return data ...
	I1213 20:24:13.556444   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:13.563220   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:13.563264   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:13.563277   79820 node_conditions.go:105] duration metric: took 6.825662ms to run NodePressure ...
	I1213 20:24:13.563301   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:13.855672   79820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:24:13.870068   79820 ops.go:34] apiserver oom_adj: -16
	I1213 20:24:13.870105   79820 kubeadm.go:597] duration metric: took 7.351714184s to restartPrimaryControlPlane
	I1213 20:24:13.870119   79820 kubeadm.go:394] duration metric: took 7.411858052s to StartCluster
	I1213 20:24:13.870140   79820 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.870220   79820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:13.871661   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.871898   79820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:24:13.871961   79820 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:24:13.872063   79820 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-535459"
	I1213 20:24:13.872081   79820 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-535459"
	W1213 20:24:13.872093   79820 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:24:13.872124   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872109   79820 addons.go:69] Setting default-storageclass=true in profile "newest-cni-535459"
	I1213 20:24:13.872135   79820 addons.go:69] Setting metrics-server=true in profile "newest-cni-535459"
	I1213 20:24:13.872156   79820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-535459"
	I1213 20:24:13.872143   79820 addons.go:69] Setting dashboard=true in profile "newest-cni-535459"
	I1213 20:24:13.872165   79820 addons.go:234] Setting addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:13.872174   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1213 20:24:13.872182   79820 addons.go:243] addon metrics-server should already be in state true
	I1213 20:24:13.872219   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872182   79820 addons.go:234] Setting addon dashboard=true in "newest-cni-535459"
	W1213 20:24:13.872286   79820 addons.go:243] addon dashboard should already be in state true
	I1213 20:24:13.872327   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872589   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872598   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872618   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872634   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872647   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872667   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872703   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872640   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.874676   79820 out.go:177] * Verifying Kubernetes components...
	I1213 20:24:13.875998   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:13.893363   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I1213 20:24:13.893468   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I1213 20:24:13.893952   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894024   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I1213 20:24:13.893961   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894530   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894709   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894722   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.894862   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894876   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895087   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.895103   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895161   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895204   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895380   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895776   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.895816   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.896278   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1213 20:24:13.896384   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.896414   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896800   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.897325   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.897345   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.897762   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.898269   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.898302   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.899617   79820 addons.go:234] Setting addon default-storageclass=true in "newest-cni-535459"
	W1213 20:24:13.899633   79820 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:24:13.899663   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.900022   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.900056   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.916023   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I1213 20:24:13.916600   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.916836   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1213 20:24:13.917124   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917139   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917211   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.917661   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917682   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917755   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.917969   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.918150   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.918406   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.920502   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.921252   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.922950   79820 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:24:13.922980   79820 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:24:13.924173   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I1213 20:24:13.924523   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:24:13.924543   79820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:24:13.924561   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.924812   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.925357   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.925375   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.925880   79820 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:24:13.926431   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.926644   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.927129   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:24:13.927146   79820 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:24:13.927165   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.929247   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.930886   79820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:24:13.794975   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:13.809490   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:13.809563   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:13.845247   78367 cri.go:89] found id: ""
	I1213 20:24:13.845312   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.845326   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:13.845337   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:13.845404   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:13.891111   78367 cri.go:89] found id: ""
	I1213 20:24:13.891155   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.891167   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:13.891174   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:13.891225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:13.944404   78367 cri.go:89] found id: ""
	I1213 20:24:13.944423   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.944431   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:13.944438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:13.944479   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:13.982745   78367 cri.go:89] found id: ""
	I1213 20:24:13.982766   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.982773   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:13.982779   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:13.982823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:14.018505   78367 cri.go:89] found id: ""
	I1213 20:24:14.018537   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.018547   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:14.018555   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:14.018622   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:14.053196   78367 cri.go:89] found id: ""
	I1213 20:24:14.053222   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.053233   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:14.053241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:14.053305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:14.085486   78367 cri.go:89] found id: ""
	I1213 20:24:14.085516   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.085526   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:14.085534   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:14.085600   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:14.123930   78367 cri.go:89] found id: ""
	I1213 20:24:14.123958   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.123968   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:14.123979   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:14.123993   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:14.184665   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:14.184705   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:14.207707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:14.207742   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:14.317989   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:14.318017   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:14.318037   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:14.440228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:14.440275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:13.932098   79820 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:13.932112   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:24:13.932127   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.934949   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934951   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.934975   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934995   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935008   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935027   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935077   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935093   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935143   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935181   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935319   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935471   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935503   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935535   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935695   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935709   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.935690   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.936047   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.940133   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1213 20:24:13.940516   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.940964   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.940980   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.941375   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.941957   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.941999   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.965055   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I1213 20:24:13.966122   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.966772   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.966800   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.967221   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.967423   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.969213   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.969387   79820 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:13.969404   79820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:24:13.969424   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.971994   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972410   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.972431   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972569   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.972706   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.972834   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.972937   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:14.127383   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:14.156652   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:14.156824   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:14.175603   79820 api_server.go:72] duration metric: took 303.674582ms to wait for apiserver process to appear ...
	I1213 20:24:14.175692   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:14.175713   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:14.180066   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:14.181204   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:14.181224   79820 api_server.go:131] duration metric: took 5.524316ms to wait for apiserver health ...
	I1213 20:24:14.181240   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:14.186870   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:14.186902   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:14.186913   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:14.186926   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:14.186935   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:14.186942   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running
	I1213 20:24:14.186950   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:14.186958   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:14.186963   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running
	I1213 20:24:14.186970   79820 system_pods.go:74] duration metric: took 5.722864ms to wait for pod list to return data ...
	I1213 20:24:14.186978   79820 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:14.191022   79820 default_sa.go:45] found service account: "default"
	I1213 20:24:14.191047   79820 default_sa.go:55] duration metric: took 4.057067ms for default service account to be created ...
	I1213 20:24:14.191062   79820 kubeadm.go:582] duration metric: took 319.136167ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:24:14.191078   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:14.203724   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:14.203754   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:14.203765   79820 node_conditions.go:105] duration metric: took 12.682303ms to run NodePressure ...
	I1213 20:24:14.203779   79820 start.go:241] waiting for startup goroutines ...
	I1213 20:24:14.265979   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:14.322830   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:24:14.322892   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:24:14.353048   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:14.355217   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:24:14.355245   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:24:14.409641   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:24:14.409670   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:24:14.425869   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:24:14.425901   79820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:24:14.489915   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:24:14.490017   79820 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:24:14.521997   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.522024   79820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:24:14.564655   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:24:14.564686   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:24:14.614041   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.641054   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:24:14.641084   79820 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:24:14.710567   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:24:14.710601   79820 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:24:14.745018   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:24:14.745055   79820 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:24:14.779553   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:24:14.779583   79820 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:24:14.893256   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:14.893286   79820 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:24:14.933845   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:16.576729   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.310647345s)
	I1213 20:24:16.576794   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576808   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576827   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.223742976s)
	I1213 20:24:16.576868   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576885   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576966   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962891887s)
	I1213 20:24:16.576995   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.577005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578358   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578370   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578382   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578394   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578394   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578402   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578413   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578421   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578424   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578430   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578432   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578442   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578457   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578404   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578486   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578697   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578728   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578743   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578825   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578853   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578862   79820 addons.go:475] Verifying addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:16.578921   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578931   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578944   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.624470   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.624501   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.624775   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.624793   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847028   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.913138549s)
	I1213 20:24:16.847092   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847111   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847446   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847467   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847482   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847737   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847764   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.849290   79820 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-535459 addons enable metrics-server
	
	I1213 20:24:16.850380   79820 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1213 20:24:16.851370   79820 addons.go:510] duration metric: took 2.979414999s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:16.851411   79820 start.go:246] waiting for cluster config update ...
	I1213 20:24:16.851425   79820 start.go:255] writing updated cluster config ...
	I1213 20:24:16.851676   79820 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:16.919885   79820 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:16.921326   79820 out.go:177] * Done! kubectl is now configured to use "newest-cni-535459" cluster and "default" namespace by default
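
The addon lines above all follow one pattern: each manifest is copied to /etc/kubernetes/addons/ on the node and then applied with the pinned kubectl binary under the minikube kubeconfig. A minimal sketch of that apply step, assuming the manifests are already on disk and a local shell stands in for minikube's ssh_runner (runSSH and the manifest list are illustrative, not minikube's real API):

package main

import (
	"fmt"
	"os/exec"
)

// runSSH stands in for minikube's ssh_runner; here it simply runs the command
// in a local shell so the sketch stays self-contained.
func runSSH(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	}
	for _, m := range manifests {
		// In the real flow the file is first scp'd onto the node; this only
		// mirrors the apply step visible in the log.
		apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.31.2/kubectl apply -f " + m
		if err := runSSH(apply); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
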
	I1213 20:24:16.992002   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:17.010798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:17.010887   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:17.054515   78367 cri.go:89] found id: ""
	I1213 20:24:17.054539   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.054548   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:17.054557   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:17.054608   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:17.106222   78367 cri.go:89] found id: ""
	I1213 20:24:17.106258   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.106269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:17.106276   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:17.106328   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:17.145680   78367 cri.go:89] found id: ""
	I1213 20:24:17.145706   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.145713   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:17.145719   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:17.145772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:17.183345   78367 cri.go:89] found id: ""
	I1213 20:24:17.183372   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.183383   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:17.183391   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:17.183440   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:17.218181   78367 cri.go:89] found id: ""
	I1213 20:24:17.218214   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.218226   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:17.218233   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:17.218308   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:17.260697   78367 cri.go:89] found id: ""
	I1213 20:24:17.260736   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.260747   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:17.260756   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:17.260815   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:17.296356   78367 cri.go:89] found id: ""
	I1213 20:24:17.296383   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.296394   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:17.296402   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:17.296452   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:17.332909   78367 cri.go:89] found id: ""
	I1213 20:24:17.332936   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.332946   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:17.332956   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:17.332979   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:17.400328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:17.400361   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:17.419802   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:17.419836   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:17.508687   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:17.508709   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:17.508724   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:17.594401   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:17.594433   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
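
From here on, process 78367 is probing an older (v1.20.0 binaries) cluster whose control plane has not come up: roughly every three seconds it lists CRI containers by name, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs; kubectl describe nodes fails because nothing is listening on localhost:8443. A rough sketch of that probe loop, assuming crictl is on PATH (waitForContainer and containerIDs are hypothetical helpers, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs lists CRI container IDs whose name matches, in all states.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// waitForContainer retries until at least one matching container is reported
// or the attempts run out; empty output means "not running yet".
func waitForContainer(name string, attempts int, interval time.Duration) bool {
	for i := 0; i < attempts; i++ {
		ids, err := containerIDs(name)
		if err == nil && len(ids) > 0 {
			return true
		}
		fmt.Printf("no %q container yet (attempt %d)\n", name, i+1)
		time.Sleep(interval)
	}
	return false
}

func main() {
	if !waitForContainer("kube-apiserver", 5, 3*time.Second) {
		fmt.Println("kube-apiserver never came up; diagnostics would be gathered next")
	}
}

The same cycle (pgrep, per-component crictl listing, log gathering) repeats verbatim below until the wait times out.
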
	I1213 20:24:20.132881   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:20.151309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:20.151382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:20.185818   78367 cri.go:89] found id: ""
	I1213 20:24:20.185845   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.185854   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:20.185862   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:20.185913   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:20.227855   78367 cri.go:89] found id: ""
	I1213 20:24:20.227885   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.227895   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:20.227902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:20.227957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:20.265126   78367 cri.go:89] found id: ""
	I1213 20:24:20.265149   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.265158   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:20.265165   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:20.265215   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:20.303082   78367 cri.go:89] found id: ""
	I1213 20:24:20.303100   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.303106   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:20.303112   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:20.303148   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:20.334523   78367 cri.go:89] found id: ""
	I1213 20:24:20.334554   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.334565   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:20.334573   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:20.334634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:20.367872   78367 cri.go:89] found id: ""
	I1213 20:24:20.367904   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.367915   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:20.367922   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:20.367972   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:20.401025   78367 cri.go:89] found id: ""
	I1213 20:24:20.401053   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.401063   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:20.401071   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:20.401118   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:20.437198   78367 cri.go:89] found id: ""
	I1213 20:24:20.437224   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.437232   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:20.437240   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:20.437252   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:20.491638   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:20.491670   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:20.507146   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:20.507176   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:20.586662   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:20.586708   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:20.586725   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:20.677650   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:20.677702   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.226457   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:23.240139   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:23.240197   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:23.276469   78367 cri.go:89] found id: ""
	I1213 20:24:23.276503   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.276514   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:23.276522   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:23.276576   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:23.321764   78367 cri.go:89] found id: ""
	I1213 20:24:23.321793   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.321804   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:23.321811   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:23.321860   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:23.355263   78367 cri.go:89] found id: ""
	I1213 20:24:23.355297   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.355308   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:23.355315   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:23.355368   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:23.396846   78367 cri.go:89] found id: ""
	I1213 20:24:23.396875   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.396885   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:23.396894   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:23.396955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:23.435540   78367 cri.go:89] found id: ""
	I1213 20:24:23.435567   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.435578   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:23.435586   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:23.435634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:23.473920   78367 cri.go:89] found id: ""
	I1213 20:24:23.473944   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.473959   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:23.473967   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:23.474023   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:23.507136   78367 cri.go:89] found id: ""
	I1213 20:24:23.507168   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.507177   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:23.507183   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:23.507239   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:23.539050   78367 cri.go:89] found id: ""
	I1213 20:24:23.539075   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.539083   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:23.539091   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:23.539104   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:23.553000   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:23.553026   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:23.619106   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:23.619128   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:23.619143   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:23.704028   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:23.704065   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.740575   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:23.740599   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.290469   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:26.303070   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:26.303114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:26.333881   78367 cri.go:89] found id: ""
	I1213 20:24:26.333902   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.333909   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:26.333915   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:26.333957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:26.367218   78367 cri.go:89] found id: ""
	I1213 20:24:26.367246   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.367253   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:26.367258   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:26.367314   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:26.397281   78367 cri.go:89] found id: ""
	I1213 20:24:26.397313   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.397325   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:26.397332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:26.397388   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:26.429238   78367 cri.go:89] found id: ""
	I1213 20:24:26.429260   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.429270   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:26.429290   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:26.429335   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:26.457723   78367 cri.go:89] found id: ""
	I1213 20:24:26.457751   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.457760   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:26.457765   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:26.457820   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:26.487066   78367 cri.go:89] found id: ""
	I1213 20:24:26.487086   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.487093   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:26.487098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:26.487153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:26.517336   78367 cri.go:89] found id: ""
	I1213 20:24:26.517360   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.517367   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:26.517373   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:26.517428   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:26.547918   78367 cri.go:89] found id: ""
	I1213 20:24:26.547940   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.547947   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:26.547955   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:26.547966   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:26.614500   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:26.614527   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:26.614541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:26.688954   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:26.688983   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:26.723430   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:26.723453   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.771679   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:26.771707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.284113   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:29.296309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:29.296365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:29.335369   78367 cri.go:89] found id: ""
	I1213 20:24:29.335395   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.335404   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:29.335411   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:29.335477   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:29.364958   78367 cri.go:89] found id: ""
	I1213 20:24:29.364996   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.365005   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:29.365011   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:29.365056   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:29.395763   78367 cri.go:89] found id: ""
	I1213 20:24:29.395785   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.395792   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:29.395798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:29.395847   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:29.426100   78367 cri.go:89] found id: ""
	I1213 20:24:29.426131   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.426141   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:29.426148   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:29.426212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:29.454982   78367 cri.go:89] found id: ""
	I1213 20:24:29.455011   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.455018   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:29.455025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:29.455086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:29.490059   78367 cri.go:89] found id: ""
	I1213 20:24:29.490088   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.490098   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:29.490105   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:29.490164   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:29.523139   78367 cri.go:89] found id: ""
	I1213 20:24:29.523170   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.523179   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:29.523184   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:29.523235   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:29.553382   78367 cri.go:89] found id: ""
	I1213 20:24:29.553411   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.553422   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:29.553432   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:29.553445   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:29.603370   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:29.603399   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.615270   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:29.615296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:29.676210   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:29.676241   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:29.676256   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:29.748591   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:29.748620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:32.283657   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:32.295699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:32.295770   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:32.326072   78367 cri.go:89] found id: ""
	I1213 20:24:32.326100   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.326109   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:32.326116   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:32.326174   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:32.359219   78367 cri.go:89] found id: ""
	I1213 20:24:32.359267   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.359279   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:32.359287   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:32.359374   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:32.389664   78367 cri.go:89] found id: ""
	I1213 20:24:32.389687   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.389694   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:32.389700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:32.389756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:32.419871   78367 cri.go:89] found id: ""
	I1213 20:24:32.419893   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.419899   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:32.419904   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:32.419955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:32.449254   78367 cri.go:89] found id: ""
	I1213 20:24:32.449282   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.449292   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:32.449300   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:32.449359   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:32.477857   78367 cri.go:89] found id: ""
	I1213 20:24:32.477887   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.477897   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:32.477905   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:32.477965   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:32.507395   78367 cri.go:89] found id: ""
	I1213 20:24:32.507420   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.507429   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:32.507437   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:32.507493   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:32.536846   78367 cri.go:89] found id: ""
	I1213 20:24:32.536882   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.536894   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:32.536904   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:32.536918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:32.586510   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:32.586540   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:32.598914   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:32.598941   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:32.661653   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:32.661673   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:32.661686   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:32.738149   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:32.738180   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:35.274525   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:35.287259   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:35.287338   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:35.321233   78367 cri.go:89] found id: ""
	I1213 20:24:35.321269   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.321280   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:35.321287   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:35.321350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:35.351512   78367 cri.go:89] found id: ""
	I1213 20:24:35.351535   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.351543   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:35.351549   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:35.351607   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:35.380770   78367 cri.go:89] found id: ""
	I1213 20:24:35.380795   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.380805   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:35.380812   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:35.380868   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:35.410311   78367 cri.go:89] found id: ""
	I1213 20:24:35.410339   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.410348   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:35.410356   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:35.410410   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:35.437955   78367 cri.go:89] found id: ""
	I1213 20:24:35.437979   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.437987   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:35.437992   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:35.438039   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:35.467621   78367 cri.go:89] found id: ""
	I1213 20:24:35.467646   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.467657   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:35.467665   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:35.467729   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:35.496779   78367 cri.go:89] found id: ""
	I1213 20:24:35.496801   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.496809   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:35.496814   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:35.496867   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:35.527107   78367 cri.go:89] found id: ""
	I1213 20:24:35.527140   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.527148   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:35.527157   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:35.527167   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:35.573444   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:35.573472   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:35.586107   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:35.586129   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:35.647226   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:35.647249   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:35.647261   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:35.721264   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:35.721297   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.256983   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:38.269600   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:38.269665   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:38.304526   78367 cri.go:89] found id: ""
	I1213 20:24:38.304552   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.304559   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:38.304566   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:38.304621   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:38.334858   78367 cri.go:89] found id: ""
	I1213 20:24:38.334885   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.334896   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:38.334902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:38.334959   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:38.364281   78367 cri.go:89] found id: ""
	I1213 20:24:38.364305   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.364312   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:38.364318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:38.364364   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:38.393853   78367 cri.go:89] found id: ""
	I1213 20:24:38.393878   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.393886   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:38.393892   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:38.393936   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:38.424196   78367 cri.go:89] found id: ""
	I1213 20:24:38.424225   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.424234   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:38.424241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:38.424305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:38.454285   78367 cri.go:89] found id: ""
	I1213 20:24:38.454311   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.454322   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:38.454330   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:38.454382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:38.483158   78367 cri.go:89] found id: ""
	I1213 20:24:38.483187   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.483194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:38.483199   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:38.483250   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:38.512116   78367 cri.go:89] found id: ""
	I1213 20:24:38.512149   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.512161   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:38.512172   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:38.512186   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:38.587026   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:38.587053   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:38.587069   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:38.661024   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:38.661055   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.695893   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:38.695922   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:38.746253   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:38.746282   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.258578   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:41.271632   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:41.271691   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:41.303047   78367 cri.go:89] found id: ""
	I1213 20:24:41.303073   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.303081   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:41.303087   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:41.303149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:41.334605   78367 cri.go:89] found id: ""
	I1213 20:24:41.334642   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.334653   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:41.334662   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:41.334714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:41.367617   78367 cri.go:89] found id: ""
	I1213 20:24:41.367650   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.367661   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:41.367670   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:41.367724   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:41.399772   78367 cri.go:89] found id: ""
	I1213 20:24:41.399800   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.399811   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:41.399819   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:41.399880   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:41.431833   78367 cri.go:89] found id: ""
	I1213 20:24:41.431869   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.431879   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:41.431887   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:41.431948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:41.462640   78367 cri.go:89] found id: ""
	I1213 20:24:41.462669   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.462679   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:41.462688   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:41.462757   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:41.492716   78367 cri.go:89] found id: ""
	I1213 20:24:41.492748   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.492758   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:41.492764   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:41.492823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:41.527697   78367 cri.go:89] found id: ""
	I1213 20:24:41.527729   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.527739   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:41.527750   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:41.527763   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.540507   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:41.540530   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:41.602837   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:41.602873   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:41.602888   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:41.676818   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:41.676855   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:41.713699   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:41.713731   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.263397   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:44.275396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:44.275463   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:44.306065   78367 cri.go:89] found id: ""
	I1213 20:24:44.306095   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.306106   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:44.306114   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:44.306170   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:44.336701   78367 cri.go:89] found id: ""
	I1213 20:24:44.336734   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.336746   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:44.336754   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:44.336803   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:44.367523   78367 cri.go:89] found id: ""
	I1213 20:24:44.367553   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.367564   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:44.367571   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:44.367626   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:44.397934   78367 cri.go:89] found id: ""
	I1213 20:24:44.397960   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.397970   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:44.397978   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:44.398043   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:44.428770   78367 cri.go:89] found id: ""
	I1213 20:24:44.428799   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.428810   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:44.428817   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:44.428874   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:44.459961   78367 cri.go:89] found id: ""
	I1213 20:24:44.459999   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.460011   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:44.460018   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:44.460068   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:44.491377   78367 cri.go:89] found id: ""
	I1213 20:24:44.491407   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.491419   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:44.491426   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:44.491488   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:44.521764   78367 cri.go:89] found id: ""
	I1213 20:24:44.521798   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.521808   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:44.521819   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:44.521835   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:44.584292   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:44.584316   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:44.584328   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:44.654841   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:44.654880   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:44.689572   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:44.689598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.738234   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:44.738265   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:47.250759   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:47.262717   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:47.262786   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:47.291884   78367 cri.go:89] found id: ""
	I1213 20:24:47.291910   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.291917   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:47.291923   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:47.291968   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:47.322010   78367 cri.go:89] found id: ""
	I1213 20:24:47.322036   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.322047   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:47.322056   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:47.322114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:47.352441   78367 cri.go:89] found id: ""
	I1213 20:24:47.352470   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.352478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:47.352483   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:47.352535   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:47.382622   78367 cri.go:89] found id: ""
	I1213 20:24:47.382646   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.382653   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:47.382659   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:47.382709   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:47.413127   78367 cri.go:89] found id: ""
	I1213 20:24:47.413149   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.413156   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:47.413161   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:47.413212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:47.445397   78367 cri.go:89] found id: ""
	I1213 20:24:47.445423   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.445430   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:47.445435   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:47.445483   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:47.475871   78367 cri.go:89] found id: ""
	I1213 20:24:47.475897   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.475904   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:47.475910   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:47.475966   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:47.505357   78367 cri.go:89] found id: ""
	I1213 20:24:47.505382   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.505389   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:47.505397   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:47.505407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:47.568960   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:47.568982   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:47.569010   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:47.646228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:47.646262   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:47.679590   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:47.679616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:47.726854   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:47.726884   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.239188   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:50.251010   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:50.251061   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:50.281168   78367 cri.go:89] found id: ""
	I1213 20:24:50.281194   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.281204   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:50.281211   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:50.281277   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:50.310396   78367 cri.go:89] found id: ""
	I1213 20:24:50.310421   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.310431   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:50.310438   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:50.310491   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:50.340824   78367 cri.go:89] found id: ""
	I1213 20:24:50.340856   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.340866   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:50.340873   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:50.340937   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:50.377401   78367 cri.go:89] found id: ""
	I1213 20:24:50.377430   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.377437   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:50.377443   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:50.377500   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:50.406521   78367 cri.go:89] found id: ""
	I1213 20:24:50.406552   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.406562   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:50.406567   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:50.406632   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:50.440070   78367 cri.go:89] found id: ""
	I1213 20:24:50.440101   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.440112   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:50.440118   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:50.440168   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:50.473103   78367 cri.go:89] found id: ""
	I1213 20:24:50.473134   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.473145   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:50.473152   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:50.473218   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:50.503787   78367 cri.go:89] found id: ""
	I1213 20:24:50.503815   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.503824   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:50.503832   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:50.503842   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:50.551379   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:50.551407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.563705   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:50.563732   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:50.625016   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:50.625046   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:50.625062   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:50.717566   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:50.717601   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.254296   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:53.266940   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:53.266995   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:53.302975   78367 cri.go:89] found id: ""
	I1213 20:24:53.303000   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.303008   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:53.303013   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:53.303080   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:53.338434   78367 cri.go:89] found id: ""
	I1213 20:24:53.338461   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.338469   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:53.338474   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:53.338526   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:53.375117   78367 cri.go:89] found id: ""
	I1213 20:24:53.375146   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.375156   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:53.375164   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:53.375221   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:53.413376   78367 cri.go:89] found id: ""
	I1213 20:24:53.413406   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.413416   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:53.413423   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:53.413482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:53.447697   78367 cri.go:89] found id: ""
	I1213 20:24:53.447725   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.447736   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:53.447743   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:53.447802   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:53.480987   78367 cri.go:89] found id: ""
	I1213 20:24:53.481019   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.481037   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:53.481045   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:53.481149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:53.516573   78367 cri.go:89] found id: ""
	I1213 20:24:53.516602   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.516611   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:53.516617   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:53.516664   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:53.552098   78367 cri.go:89] found id: ""
	I1213 20:24:53.552128   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.552144   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:53.552155   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:53.552168   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:53.632362   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:53.632393   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.667030   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:53.667061   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:53.716328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:53.716355   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:53.730194   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:53.730219   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:53.804612   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.305032   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:56.317875   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:56.317934   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:56.353004   78367 cri.go:89] found id: ""
	I1213 20:24:56.353027   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.353035   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:56.353040   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:56.353086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:56.398694   78367 cri.go:89] found id: ""
	I1213 20:24:56.398722   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.398731   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:56.398739   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:56.398800   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:56.430481   78367 cri.go:89] found id: ""
	I1213 20:24:56.430512   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.430523   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:56.430530   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:56.430589   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:56.460467   78367 cri.go:89] found id: ""
	I1213 20:24:56.460501   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.460512   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:56.460520   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:56.460583   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:56.490776   78367 cri.go:89] found id: ""
	I1213 20:24:56.490804   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.490814   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:56.490822   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:56.490889   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:56.520440   78367 cri.go:89] found id: ""
	I1213 20:24:56.520466   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.520473   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:56.520478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:56.520525   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:56.550233   78367 cri.go:89] found id: ""
	I1213 20:24:56.550258   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.550266   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:56.550271   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:56.550347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:56.580651   78367 cri.go:89] found id: ""
	I1213 20:24:56.580681   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.580692   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:56.580703   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:56.580716   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:56.650811   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.650839   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:56.650892   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:56.728061   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:56.728089   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:56.767782   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:56.767809   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:56.818747   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:56.818781   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:59.331474   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:59.344319   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:59.344379   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:59.373901   78367 cri.go:89] found id: ""
	I1213 20:24:59.373931   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.373941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:59.373947   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:59.373999   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:59.405800   78367 cri.go:89] found id: ""
	I1213 20:24:59.405832   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.405844   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:59.405851   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:59.405922   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:59.435487   78367 cri.go:89] found id: ""
	I1213 20:24:59.435517   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.435527   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:59.435535   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:59.435587   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:59.466466   78367 cri.go:89] found id: ""
	I1213 20:24:59.466489   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.466497   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:59.466502   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:59.466543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:59.500301   78367 cri.go:89] found id: ""
	I1213 20:24:59.500330   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.500337   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:59.500342   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:59.500387   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:59.532614   78367 cri.go:89] found id: ""
	I1213 20:24:59.532642   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.532651   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:59.532658   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:59.532717   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:59.562990   78367 cri.go:89] found id: ""
	I1213 20:24:59.563013   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.563020   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:59.563034   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:59.563078   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:59.593335   78367 cri.go:89] found id: ""
	I1213 20:24:59.593366   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.593376   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:59.593386   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:59.593401   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:59.659058   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:59.659083   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:59.659097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:59.733569   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:59.733600   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:59.770151   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:59.770178   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:59.820506   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:59.820534   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.334083   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:02.346559   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:02.346714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:02.380346   78367 cri.go:89] found id: ""
	I1213 20:25:02.380376   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.380384   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:02.380390   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:02.380441   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:02.412347   78367 cri.go:89] found id: ""
	I1213 20:25:02.412374   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.412385   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:02.412392   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:02.412453   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:02.443408   78367 cri.go:89] found id: ""
	I1213 20:25:02.443441   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.443453   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:02.443461   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:02.443514   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:02.474165   78367 cri.go:89] found id: ""
	I1213 20:25:02.474193   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.474201   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:02.474206   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:02.474272   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:02.505076   78367 cri.go:89] found id: ""
	I1213 20:25:02.505109   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.505121   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:02.505129   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:02.505186   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:02.541145   78367 cri.go:89] found id: ""
	I1213 20:25:02.541174   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.541182   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:02.541187   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:02.541236   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:02.579150   78367 cri.go:89] found id: ""
	I1213 20:25:02.579183   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.579194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:02.579201   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:02.579262   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:02.611542   78367 cri.go:89] found id: ""
	I1213 20:25:02.611582   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.611594   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:02.611607   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:02.611620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:02.661145   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:02.661183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.673918   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:02.673944   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:02.745321   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:02.745345   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:02.745358   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:02.820953   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:02.820992   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.373838   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:05.386758   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:05.386833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:05.419177   78367 cri.go:89] found id: ""
	I1213 20:25:05.419205   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.419215   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:05.419223   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:05.419292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:05.450595   78367 cri.go:89] found id: ""
	I1213 20:25:05.450628   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.450639   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:05.450648   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:05.450707   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:05.481818   78367 cri.go:89] found id: ""
	I1213 20:25:05.481844   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.481852   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:05.481857   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:05.481902   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:05.517195   78367 cri.go:89] found id: ""
	I1213 20:25:05.517230   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.517239   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:05.517246   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:05.517302   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:05.548698   78367 cri.go:89] found id: ""
	I1213 20:25:05.548733   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.548744   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:05.548753   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:05.548811   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:05.579983   78367 cri.go:89] found id: ""
	I1213 20:25:05.580009   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.580015   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:05.580022   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:05.580070   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:05.610660   78367 cri.go:89] found id: ""
	I1213 20:25:05.610685   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.610693   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:05.610699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:05.610750   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:05.641572   78367 cri.go:89] found id: ""
	I1213 20:25:05.641598   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.641605   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:05.641614   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:05.641625   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:05.712243   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:05.712264   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:05.712275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:05.793232   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:05.793271   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.827863   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:05.827901   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:05.877641   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:05.877671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.390425   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:08.402888   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:08.402944   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:08.436903   78367 cri.go:89] found id: ""
	I1213 20:25:08.436931   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.436941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:08.436948   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:08.437005   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:08.469526   78367 cri.go:89] found id: ""
	I1213 20:25:08.469561   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.469574   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:08.469581   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:08.469644   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:08.500136   78367 cri.go:89] found id: ""
	I1213 20:25:08.500165   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.500172   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:08.500178   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:08.500223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:08.537556   78367 cri.go:89] found id: ""
	I1213 20:25:08.537591   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.537603   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:08.537611   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:08.537669   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:08.577468   78367 cri.go:89] found id: ""
	I1213 20:25:08.577492   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.577501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:08.577509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:08.577566   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:08.632075   78367 cri.go:89] found id: ""
	I1213 20:25:08.632103   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.632113   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:08.632120   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:08.632178   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:08.671119   78367 cri.go:89] found id: ""
	I1213 20:25:08.671148   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.671158   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:08.671166   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:08.671225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:08.700873   78367 cri.go:89] found id: ""
	I1213 20:25:08.700900   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.700908   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:08.700916   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:08.700927   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.713084   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:08.713107   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:08.780299   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:08.780331   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:08.780346   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:08.851830   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:08.851865   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:08.886834   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:08.886883   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.435256   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:11.447096   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:11.447155   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:11.477376   78367 cri.go:89] found id: ""
	I1213 20:25:11.477403   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.477411   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:11.477416   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:11.477460   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:11.507532   78367 cri.go:89] found id: ""
	I1213 20:25:11.507564   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.507572   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:11.507582   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:11.507628   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:11.537352   78367 cri.go:89] found id: ""
	I1213 20:25:11.537383   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.537393   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:11.537400   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:11.537450   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:11.567653   78367 cri.go:89] found id: ""
	I1213 20:25:11.567681   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.567693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:11.567700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:11.567756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:11.597752   78367 cri.go:89] found id: ""
	I1213 20:25:11.597782   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.597790   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:11.597795   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:11.597840   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:11.626231   78367 cri.go:89] found id: ""
	I1213 20:25:11.626258   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.626269   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:11.626276   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:11.626334   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:11.655694   78367 cri.go:89] found id: ""
	I1213 20:25:11.655724   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.655733   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:11.655740   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:11.655794   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:11.685714   78367 cri.go:89] found id: ""
	I1213 20:25:11.685742   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.685750   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:11.685758   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:11.685768   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.733749   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:11.733774   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:11.746307   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:11.746330   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:11.807168   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:11.807190   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:11.807202   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:11.878490   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:11.878522   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.416516   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:14.428258   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:14.428339   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:14.458229   78367 cri.go:89] found id: ""
	I1213 20:25:14.458255   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.458263   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:14.458272   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:14.458326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:14.488061   78367 cri.go:89] found id: ""
	I1213 20:25:14.488101   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.488109   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:14.488114   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:14.488159   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:14.516854   78367 cri.go:89] found id: ""
	I1213 20:25:14.516880   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.516888   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:14.516893   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:14.516953   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:14.549881   78367 cri.go:89] found id: ""
	I1213 20:25:14.549908   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.549919   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:14.549925   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:14.549982   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:14.579410   78367 cri.go:89] found id: ""
	I1213 20:25:14.579439   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.579449   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:14.579457   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:14.579507   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:14.609126   78367 cri.go:89] found id: ""
	I1213 20:25:14.609155   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.609163   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:14.609169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:14.609216   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:14.638655   78367 cri.go:89] found id: ""
	I1213 20:25:14.638682   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.638689   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:14.638694   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:14.638739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:14.667950   78367 cri.go:89] found id: ""
	I1213 20:25:14.667977   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.667986   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:14.667997   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:14.668011   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.705223   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:14.705250   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:14.753645   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:14.753671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:14.766082   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:14.766106   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:14.826802   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:14.826829   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:14.826841   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:17.400518   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:17.412464   78367 kubeadm.go:597] duration metric: took 4m2.435244002s to restartPrimaryControlPlane
	W1213 20:25:17.412536   78367 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 20:25:17.412564   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:25:19.422149   78367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.009561199s)
	I1213 20:25:19.422215   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:25:19.435431   78367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:25:19.444465   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:25:19.452996   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:25:19.453011   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:25:19.453051   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:25:19.461055   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:25:19.461096   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:25:19.469525   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:25:19.477399   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:25:19.477442   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:25:19.485719   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.493837   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:25:19.493895   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.502493   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:25:19.510479   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:25:19.510525   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:25:19.518746   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:25:19.585664   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:25:19.585781   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:25:19.709117   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:25:19.709242   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:25:19.709362   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:25:19.865449   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:25:19.867503   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:25:19.867605   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:25:19.867668   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:25:19.867759   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:25:19.867864   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:25:19.867978   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:25:19.868062   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:25:19.868159   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:25:19.868251   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:25:19.868515   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:25:19.868889   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:25:19.869062   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:25:19.869157   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:25:19.955108   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:25:20.380950   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:25:20.496704   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:25:20.598530   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:25:20.612045   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:25:20.613742   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:25:20.613809   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:25:20.733629   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:25:20.735476   78367 out.go:235]   - Booting up control plane ...
	I1213 20:25:20.735586   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:25:20.739585   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:25:20.740414   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:25:20.741056   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:25:20.743491   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:26:00.744556   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:26:00.745298   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:00.745523   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:05.746023   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:05.746244   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:15.746586   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:15.746767   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:35.747606   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:35.747803   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749327   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:27:15.749616   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749642   78367 kubeadm.go:310] 
	I1213 20:27:15.749705   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:27:15.749763   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:27:15.749771   78367 kubeadm.go:310] 
	I1213 20:27:15.749801   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:27:15.749858   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:27:15.749970   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:27:15.749978   78367 kubeadm.go:310] 
	I1213 20:27:15.750116   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:27:15.750147   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:27:15.750175   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:27:15.750182   78367 kubeadm.go:310] 
	I1213 20:27:15.750323   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:27:15.750445   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:27:15.750469   78367 kubeadm.go:310] 
	I1213 20:27:15.750594   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:27:15.750679   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:27:15.750750   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:27:15.750838   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:27:15.750867   78367 kubeadm.go:310] 
	I1213 20:27:15.751901   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:27:15.752044   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:27:15.752128   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1213 20:27:15.752253   78367 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 20:27:15.752296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:27:16.207985   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:27:16.221729   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:27:16.230896   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:27:16.230915   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:27:16.230963   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:27:16.239780   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:27:16.239853   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:27:16.248841   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:27:16.257494   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:27:16.257547   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:27:16.266220   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.274395   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:27:16.274446   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.282941   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:27:16.291155   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:27:16.291206   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:27:16.299780   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:27:16.492967   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:29:12.537014   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:29:12.537124   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:29:12.538949   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:29:12.539024   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:29:12.539128   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:29:12.539224   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:29:12.539305   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:29:12.539357   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:29:12.540964   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:29:12.541051   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:29:12.541164   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:29:12.541297   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:29:12.541385   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:29:12.541510   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:29:12.541593   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:29:12.541696   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:29:12.541764   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:29:12.541825   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:29:12.541886   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:29:12.541918   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:29:12.541993   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:29:12.542062   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:29:12.542141   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:29:12.542249   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:29:12.542337   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:29:12.542454   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:29:12.542564   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:29:12.542608   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:29:12.542689   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:29:12.544295   78367 out.go:235]   - Booting up control plane ...
	I1213 20:29:12.544374   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:29:12.544440   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:29:12.544496   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:29:12.544566   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:29:12.544708   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:29:12.544763   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:29:12.544822   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.544980   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545046   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545210   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545282   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545456   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545529   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545681   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545742   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545910   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545920   78367 kubeadm.go:310] 
	I1213 20:29:12.545956   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:29:12.545989   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:29:12.545999   78367 kubeadm.go:310] 
	I1213 20:29:12.546026   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:29:12.546053   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:29:12.546145   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:29:12.546153   78367 kubeadm.go:310] 
	I1213 20:29:12.546246   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:29:12.546317   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:29:12.546377   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:29:12.546386   78367 kubeadm.go:310] 
	I1213 20:29:12.546485   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:29:12.546561   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:29:12.546568   78367 kubeadm.go:310] 
	I1213 20:29:12.546677   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:29:12.546761   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:29:12.546831   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:29:12.546913   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:29:12.546942   78367 kubeadm.go:310] 
	I1213 20:29:12.546976   78367 kubeadm.go:394] duration metric: took 7m57.617019103s to StartCluster
	I1213 20:29:12.547025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:29:12.547089   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:29:12.589567   78367 cri.go:89] found id: ""
	I1213 20:29:12.589592   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.589599   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:29:12.589605   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:29:12.589660   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:29:12.621414   78367 cri.go:89] found id: ""
	I1213 20:29:12.621438   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.621445   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:29:12.621450   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:29:12.621510   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:29:12.652624   78367 cri.go:89] found id: ""
	I1213 20:29:12.652655   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.652666   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:29:12.652674   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:29:12.652739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:29:12.682651   78367 cri.go:89] found id: ""
	I1213 20:29:12.682683   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.682693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:29:12.682701   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:29:12.682767   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:29:12.714100   78367 cri.go:89] found id: ""
	I1213 20:29:12.714127   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.714134   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:29:12.714140   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:29:12.714194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:29:12.745402   78367 cri.go:89] found id: ""
	I1213 20:29:12.745436   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.745446   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:29:12.745454   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:29:12.745515   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:29:12.775916   78367 cri.go:89] found id: ""
	I1213 20:29:12.775942   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.775949   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:29:12.775954   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:29:12.776009   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:29:12.806128   78367 cri.go:89] found id: ""
	I1213 20:29:12.806161   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.806171   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:29:12.806183   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:29:12.806197   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:29:12.841122   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:29:12.841151   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:29:12.888169   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:29:12.888203   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:29:12.900707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:29:12.900733   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:29:12.969370   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:29:12.969408   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:29:12.969423   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 20:29:13.074903   78367 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:29:13.074961   78367 out.go:270] * 
	W1213 20:29:13.075016   78367 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.075034   78367 out.go:270] * 
	W1213 20:29:13.075878   78367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 20:29:13.079429   78367 out.go:201] 
	W1213 20:29:13.080898   78367 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.080953   78367 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:29:13.080984   78367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:29:13.082622   78367 out.go:201] 
	
	
	==> CRI-O <==
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.856083270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122295856065227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5209fecd-d665-4445-9073-f464c5045ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.856481388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb382bb3-71f6-432e-ba08-dbc829104ddf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.856543903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb382bb3-71f6-432e-ba08-dbc829104ddf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.856579833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fb382bb3-71f6-432e-ba08-dbc829104ddf name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.884682292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10b3c19a-5baf-4ac9-916e-eb577adbe6cc name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.884766266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10b3c19a-5baf-4ac9-916e-eb577adbe6cc name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.885493097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=801c9696-0eee-45a0-9a43-d5676b4d89b9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.885949845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122295885930602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=801c9696-0eee-45a0-9a43-d5676b4d89b9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.886390124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80f0bb41-e6ef-425a-a7c1-38bcd85b9960 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.886455129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80f0bb41-e6ef-425a-a7c1-38bcd85b9960 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.886500575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=80f0bb41-e6ef-425a-a7c1-38bcd85b9960 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.914051759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9514e6b2-baf7-4f1f-8133-ea4e1cece8b3 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.914127650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9514e6b2-baf7-4f1f-8133-ea4e1cece8b3 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.915293002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=423b7414-5025-4a9f-8553-075748aa797a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.915728330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122295915709003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=423b7414-5025-4a9f-8553-075748aa797a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.916143984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d20dd662-77ff-407f-bbd4-2eb58f7894ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.916190532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d20dd662-77ff-407f-bbd4-2eb58f7894ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.916220363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d20dd662-77ff-407f-bbd4-2eb58f7894ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.943497840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9e82efd-d5db-4fa5-80b1-0d797acb3781 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.943583997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9e82efd-d5db-4fa5-80b1-0d797acb3781 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.944312574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a33a285-e064-4756-9eb7-94d1a387a515 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.944813793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122295944783863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a33a285-e064-4756-9eb7-94d1a387a515 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.945223506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bd7e779-d96e-4ab2-a242-ea087555d69b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.945271064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bd7e779-d96e-4ab2-a242-ea087555d69b name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:38:15 old-k8s-version-613355 crio[625]: time="2024-12-13 20:38:15.945323622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4bd7e779-d96e-4ab2-a242-ea087555d69b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 20:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060967] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039950] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.018359] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.144058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 20:21] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.064800] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055429] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.157241] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.148226] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.222516] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.266047] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.062703] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.713915] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +12.418230] kauditd_printk_skb: 46 callbacks suppressed
	[Dec13 20:25] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Dec13 20:27] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.061209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:38:16 up 17 min,  0 users,  load average: 0.07, 0.06, 0.07
	Linux old-k8s-version-613355 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002541c0, 0xc000dc1320, 0x1, 0x0, 0x0)
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000350fc0)
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: goroutine 128 [runnable]:
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: runtime.Gosched(...)
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /usr/local/go/src/runtime/proc.go:271
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001d83c0, 0x0, 0x0)
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000350fc0)
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 13 20:38:12 old-k8s-version-613355 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 13 20:38:12 old-k8s-version-613355 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 13 20:38:12 old-k8s-version-613355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 20:38:13 old-k8s-version-613355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 13 20:38:13 old-k8s-version-613355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 13 20:38:13 old-k8s-version-613355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 13 20:38:13 old-k8s-version-613355 kubelet[6504]: I1213 20:38:13.624353    6504 server.go:416] Version: v1.20.0
	Dec 13 20:38:13 old-k8s-version-613355 kubelet[6504]: I1213 20:38:13.624618    6504 server.go:837] Client rotation is on, will bootstrap in background
	Dec 13 20:38:13 old-k8s-version-613355 kubelet[6504]: I1213 20:38:13.626282    6504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 13 20:38:13 old-k8s-version-613355 kubelet[6504]: W1213 20:38:13.627128    6504 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 13 20:38:13 old-k8s-version-613355 kubelet[6504]: I1213 20:38:13.627205    6504 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
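The kubelet log above ends with the v1.20.0 kubelet crash-looping (systemd restart counter at 114) and warning that it "Cannot detect current cgroup on cgroup v2". As a hedged illustration only (not part of the minikube test harness), the following Go sketch shows one way to check whether a Linux host exposes the unified cgroup v2 hierarchy that this warning refers to; the program and its output strings are assumptions, not minikube code.

// cgroupcheck.go: hypothetical diagnostic sketch, not minikube code.
// Detects whether a Linux host uses the unified cgroup v2 hierarchy,
// which the v1.20 kubelet in the log above warns it cannot detect.
package main

import (
	"fmt"
	"os"
)

func main() {
	// On cgroup v2 (unified) hosts the controllers file sits at the root
	// of /sys/fs/cgroup; on legacy cgroup v1 hosts it does not exist there.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy) detected")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 (legacy hierarchy) detected")
	} else {
		fmt.Printf("could not inspect /sys/fs/cgroup: %v\n", err)
	}
}

Run inside the guest VM this would distinguish a v1 from a v2 host at a glance; it is only a diagnostic aid and does not change the test outcome.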
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (215.882818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-613355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.69s)
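The status check above reports the apiserver as Stopped, matching the "connection refused" errors seen throughout the post-mortem log. As a hypothetical diagnostic sketch (not the harness's code), the Go program below probes the /healthz endpoint at the apiserver address recorded in the warnings of the next test; any HTTP response, even 401 or 403, would mean the apiserver process is listening, while "connection refused" reproduces the failure the tests saw.

// apiserver_probe.go: hypothetical reachability probe, not harness code.
// The address is the one logged by the AddonExistsAfterStop warnings below.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Only reachability matters here, so skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.134:8443/healthz")
	if err != nil {
		// "connection refused" here corresponds to the Stopped apiserver above.
		fmt.Printf("apiserver unreachable: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("apiserver responded: %s\n", resp.Status)
}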

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (335.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:38:24.127676   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:39:01.060042   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:39:41.598626   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:39:44.010335   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:40:23.338278   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:40:41.726078   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:40:55.165268   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:41:28.029027   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/no-preload-475934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:41:47.105418   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:42:27.499830   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:42:51.093204   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/no-preload-475934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:43:03.112706   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:43:10.172506   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/default-k8s-diff-port-355668/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:43:24.127261   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
E1213 20:43:44.797250   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.134:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.134:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (224.000393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-613355" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-613355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-613355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.585µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-613355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
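For context, the wait that times out above is a label-selector poll against the apiserver, which keeps retrying through the "connection refused" warnings until the 9m0s deadline. A minimal client-go sketch of that pattern follows; the namespace, selector and 9-minute deadline mirror the log, while the 5-second poll interval and the kubeconfig loading are illustrative assumptions, not the minikube helper itself.

// poll_dashboard.go - illustrative sketch only, not the minikube test helper.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod lists pods matching the selector until one reports phase
// Running, or the context deadline expires.
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Mirrors the WARNING lines above: transient errors (for example a
			// refused connection while the apiserver is down) are logged and retried.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
		case <-time.After(5 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForLabeledPod(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println(err)
	}
}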
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (210.7066ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-613355 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-191190 image list                          | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| delete  | -p embed-certs-191190                                  | embed-certs-191190           | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:22 UTC |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:22 UTC | 13 Dec 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-535459             | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-535459                  | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-535459 --memory=2200 --alsologtostderr   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:23 UTC | 13 Dec 24 20:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | no-preload-475934 image list                           | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| image   | newest-cni-535459 image list                           | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| delete  | -p no-preload-475934                                   | no-preload-475934            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| image   | default-k8s-diff-port-355668                           | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-535459                                   | newest-cni-535459            | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	| unpause | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-355668 | jenkins | v1.34.0 | 13 Dec 24 20:24 UTC | 13 Dec 24 20:24 UTC |
	|         | default-k8s-diff-port-355668                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 20:23:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 20:23:38.197995   79820 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:23:38.198359   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198412   79820 out.go:358] Setting ErrFile to fd 2...
	I1213 20:23:38.198430   79820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:23:38.198912   79820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:23:38.199937   79820 out.go:352] Setting JSON to false
	I1213 20:23:38.200882   79820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7561,"bootTime":1734113857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:23:38.200969   79820 start.go:139] virtualization: kvm guest
	I1213 20:23:38.202746   79820 out.go:177] * [newest-cni-535459] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:23:38.204302   79820 notify.go:220] Checking for updates...
	I1213 20:23:38.204304   79820 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:23:38.205592   79820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:23:38.206687   79820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:38.207863   79820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:23:38.208920   79820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:23:38.209928   79820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:23:38.211390   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:38.211789   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.211857   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.227106   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I1213 20:23:38.227528   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.228121   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.228141   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.228624   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.228802   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.229038   79820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:23:38.229314   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.229353   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.244124   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I1213 20:23:38.244541   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.245118   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.245150   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.245472   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.245656   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.280882   79820 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 20:23:38.282056   79820 start.go:297] selected driver: kvm2
	I1213 20:23:38.282071   79820 start.go:901] validating driver "kvm2" against &{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.282177   79820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:23:38.282946   79820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.283023   79820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 20:23:38.297713   79820 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 20:23:38.298132   79820 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:23:38.298167   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:23:38.298222   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:38.298272   79820 start.go:340] cluster config:
	{Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:23:38.298394   79820 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 20:23:38.299870   79820 out.go:177] * Starting "newest-cni-535459" primary control-plane node in "newest-cni-535459" cluster
	I1213 20:23:38.300922   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:23:38.300954   79820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 20:23:38.300961   79820 cache.go:56] Caching tarball of preloaded images
	I1213 20:23:38.301027   79820 preload.go:172] Found /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 20:23:38.301037   79820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 20:23:38.301139   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:38.301353   79820 start.go:360] acquireMachinesLock for newest-cni-535459: {Name:mkc278ae0927dbec7538ca4f7c13001e5f3abc49 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 20:23:38.301405   79820 start.go:364] duration metric: took 31.317µs to acquireMachinesLock for "newest-cni-535459"
	I1213 20:23:38.301424   79820 start.go:96] Skipping create...Using existing machine configuration
	I1213 20:23:38.301434   79820 fix.go:54] fixHost starting: 
	I1213 20:23:38.301810   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:38.301846   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:38.316577   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I1213 20:23:38.317005   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:38.317449   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:23:38.317467   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:38.317793   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:38.317965   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:38.318117   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:23:38.319590   79820 fix.go:112] recreateIfNeeded on newest-cni-535459: state=Stopped err=<nil>
	I1213 20:23:38.319614   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	W1213 20:23:38.319782   79820 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 20:23:38.321580   79820 out.go:177] * Restarting existing kvm2 VM for "newest-cni-535459" ...
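The fixHost sequence above is a restart-if-stopped decision: the existing machine configuration is reused, its state is queried, and a Stopped domain is simply started again. A hedged sketch of that decision follows; the Driver interface and fakeKVM type are hypothetical stand-ins modeled on the GetState/Start libmachine calls in the log.

// Illustrative sketch of the restart-if-stopped decision seen in fix.go above.
package main

import "fmt"

// Driver abstracts the two libmachine calls visible in the log.
type Driver interface {
	GetState() (string, error) // e.g. "Running", "Stopped"
	Start() error
}

// fakeKVM stands in for the kvm2 plugin purely for illustration.
type fakeKVM struct{ state string }

func (f *fakeKVM) GetState() (string, error) { return f.state, nil }
func (f *fakeKVM) Start() error              { f.state = "Running"; return nil }

// fixHost reuses a running machine and restarts a stopped one,
// mirroring "Skipping create...Using existing machine configuration".
func fixHost(name string, d Driver) error {
	state, err := d.GetState()
	if err != nil {
		return fmt.Errorf("get state of %q: %w", name, err)
	}
	if state == "Running" {
		return nil
	}
	fmt.Printf("unexpected machine state %q, will restart %s\n", state, name)
	return d.Start()
}

func main() {
	d := &fakeKVM{state: "Stopped"}
	if err := fixHost("newest-cni-535459", d); err != nil {
		fmt.Println("restart failed:", err)
	}
}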
	I1213 20:23:38.105462   77223 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.795842823s)
	I1213 20:23:38.105518   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:38.120268   77223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:38.129684   77223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:38.141849   77223 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:38.141869   77223 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:38.141910   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:23:38.150679   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:38.150731   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:38.159954   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:23:38.168900   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:38.168957   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:38.178775   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.187799   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:38.187850   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:38.197158   77223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:23:38.206667   77223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:38.206722   77223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:23:38.216276   77223 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:38.370967   77223 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
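The config cleanup above (kubeadm.go:155-163) checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the subsequent kubeadm init can regenerate them. A sketch of that loop follows, assuming the same grep/rm commands run locally; the SSH transport minikube uses to reach the VM is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleKubeconfigs mirrors the grep/rm sequence in the log: a config that
// does not mention the expected control-plane endpoint is treated as stale and removed.
func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Println("remove failed:", err)
			}
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}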
	I1213 20:23:39.027955   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:39.041250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:39.041315   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:39.083287   78367 cri.go:89] found id: ""
	I1213 20:23:39.083314   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.083324   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:39.083331   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:39.083384   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:39.125760   78367 cri.go:89] found id: ""
	I1213 20:23:39.125787   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.125798   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:39.125805   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:39.125857   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:39.159459   78367 cri.go:89] found id: ""
	I1213 20:23:39.159487   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.159497   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:39.159504   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:39.159557   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:39.194175   78367 cri.go:89] found id: ""
	I1213 20:23:39.194204   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.194211   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:39.194217   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:39.194265   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:39.228851   78367 cri.go:89] found id: ""
	I1213 20:23:39.228879   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.228889   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:39.228897   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:39.228948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:39.266408   78367 cri.go:89] found id: ""
	I1213 20:23:39.266441   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.266452   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:39.266460   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:39.266505   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:39.303917   78367 cri.go:89] found id: ""
	I1213 20:23:39.303946   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.303957   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:39.303965   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:39.304024   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:39.337643   78367 cri.go:89] found id: ""
	I1213 20:23:39.337670   78367 logs.go:282] 0 containers: []
	W1213 20:23:39.337680   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
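The block above enumerates containers per control-plane component with crictl and warns when none are found. A sketch of that scan follows, assuming crictl is invoked locally (minikube actually runs it over SSH inside the VM); the component names are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs asks crictl for all containers (any state) whose name matches,
// returning their IDs; an empty result reproduces the "0 containers" warnings above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
		} else {
			fmt.Printf("I found %d container(s) for %q\n", len(ids), c)
		}
	}
}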
	I1213 20:23:39.337690   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:39.337707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:39.394343   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:39.394375   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:39.411615   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:39.411645   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:39.484070   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:39.484095   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:39.484110   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:39.570207   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:39.570231   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
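When no containers are found, the fallback diagnostics above shell out to journalctl, dmesg and kubectl, tolerating failures such as the refused connection to localhost:8443. A compressed sketch of that best-effort gathering follows; the command strings are copied from the log, but the transport is simplified to local exec and is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each source is gathered independently; a failure (like the refused
	// "describe nodes" call above) is reported as a warning and the rest still run.
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":          "sudo journalctl -u crio -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("W failed to gather %s: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s <== (%d bytes)\n", name, len(out))
	}
}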
	I1213 20:23:38.322621   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Start
	I1213 20:23:38.322783   79820 main.go:141] libmachine: (newest-cni-535459) starting domain...
	I1213 20:23:38.322806   79820 main.go:141] libmachine: (newest-cni-535459) ensuring networks are active...
	I1213 20:23:38.323533   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network default is active
	I1213 20:23:38.323827   79820 main.go:141] libmachine: (newest-cni-535459) Ensuring network mk-newest-cni-535459 is active
	I1213 20:23:38.324140   79820 main.go:141] libmachine: (newest-cni-535459) getting domain XML...
	I1213 20:23:38.324747   79820 main.go:141] libmachine: (newest-cni-535459) creating domain...
	I1213 20:23:39.564073   79820 main.go:141] libmachine: (newest-cni-535459) waiting for IP...
	I1213 20:23:39.565035   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.565551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.565617   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.565533   79856 retry.go:31] will retry after 298.228952ms: waiting for domain to come up
	I1213 20:23:39.865149   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:39.865713   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:39.865742   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:39.865696   79856 retry.go:31] will retry after 251.6627ms: waiting for domain to come up
	I1213 20:23:40.119294   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.119854   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.119884   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.119834   79856 retry.go:31] will retry after 300.482126ms: waiting for domain to come up
	I1213 20:23:40.422534   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.423263   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.423290   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.423228   79856 retry.go:31] will retry after 512.35172ms: waiting for domain to come up
	I1213 20:23:40.936920   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:40.937508   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:40.937541   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:40.937492   79856 retry.go:31] will retry after 706.292926ms: waiting for domain to come up
	I1213 20:23:41.645625   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:41.646229   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:41.646365   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:41.646289   79856 retry.go:31] will retry after 925.304714ms: waiting for domain to come up
	I1213 20:23:42.572832   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:42.573505   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:42.573551   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:42.573492   79856 retry.go:31] will retry after 784.905312ms: waiting for domain to come up
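
The "waiting for domain to come up" retries above poll libvirt until the guest's MAC address acquires a DHCP lease. A hedged way to inspect the same state manually, assuming virsh is available on the host (domain and network names taken from the log):

    virsh -c qemu:///system domiflist newest-cni-535459           # interface, MAC and source network
    virsh -c qemu:///system net-dhcp-leases mk-newest-cni-535459  # stays empty until the guest gets an IP
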
	I1213 20:23:44.821257   77510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.710060568s)
	I1213 20:23:44.821343   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:44.851774   77510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:23:44.867597   77510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:23:44.882988   77510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:23:44.883012   77510 kubeadm.go:157] found existing configuration files:
	
	I1213 20:23:44.883061   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1213 20:23:44.897859   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:23:44.897930   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:23:44.930490   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1213 20:23:44.940775   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:23:44.940832   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:23:44.949814   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.958792   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:23:44.958864   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:23:44.967799   77510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1213 20:23:44.976918   77510 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:23:44.976978   77510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
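
The grep/rm cycle above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected API endpoint is deleted so kubeadm can regenerate it. A minimal sketch of the same check, using the endpoint and paths shown in the log:

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done
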
	I1213 20:23:44.985827   77510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:23:45.032679   77510 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:45.032823   77510 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:45.154457   77510 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:45.154613   77510 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:45.154753   77510 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:45.168560   77510 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:45.170392   77510 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:45.170484   77510 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:45.170567   77510 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:45.170671   77510 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:45.170773   77510 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:45.170895   77510 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:45.175078   77510 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:45.175301   77510 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:45.175631   77510 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:45.175826   77510 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:45.176621   77510 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:45.176938   77510 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:45.177096   77510 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:45.425420   77510 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:45.744337   77510 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.051697   77510 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.134768   77510 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.244436   77510 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.245253   77510 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.248609   77510 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:23:46.425197   77223 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 20:23:46.425300   77223 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:23:46.425412   77223 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:23:46.425543   77223 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:23:46.425669   77223 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 20:23:46.425751   77223 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:23:46.427622   77223 out.go:235]   - Generating certificates and keys ...
	I1213 20:23:46.427725   77223 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:23:46.427829   77223 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:23:46.427918   77223 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:23:46.428011   77223 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:23:46.428119   77223 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:23:46.428197   77223 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:23:46.428286   77223 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:23:46.428363   77223 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:23:46.428447   77223 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:23:46.428558   77223 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:23:46.428626   77223 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:23:46.428704   77223 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:23:46.428791   77223 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:23:46.428896   77223 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 20:23:46.428988   77223 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:23:46.429081   77223 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:23:46.429176   77223 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:23:46.429297   77223 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:23:46.429377   77223 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:23:46.430801   77223 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.430919   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.431003   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.431082   77223 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.431200   77223 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.431334   77223 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.431408   77223 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.431609   77223 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.431761   77223 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.431850   77223 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.304495ms
	I1213 20:23:46.432010   77223 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 20:23:46.432103   77223 kubeadm.go:310] [api-check] The API server is healthy after 5.002258285s
	I1213 20:23:46.432266   77223 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:46.432423   77223 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:46.432498   77223 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:46.432678   77223 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-475934 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:46.432749   77223 kubeadm.go:310] [bootstrap-token] Using token: ztynho.1kbaokhemrbxet6k
	I1213 20:23:46.434022   77223 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:46.434143   77223 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:46.434228   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:46.434361   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:46.434498   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:46.434622   77223 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:46.434723   77223 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:46.434870   77223 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:46.434940   77223 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:46.435004   77223 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:46.435013   77223 kubeadm.go:310] 
	I1213 20:23:46.435096   77223 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:46.435109   77223 kubeadm.go:310] 
	I1213 20:23:46.435171   77223 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:46.435177   77223 kubeadm.go:310] 
	I1213 20:23:46.435197   77223 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:46.435248   77223 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:46.435294   77223 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:46.435300   77223 kubeadm.go:310] 
	I1213 20:23:46.435352   77223 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:46.435363   77223 kubeadm.go:310] 
	I1213 20:23:46.435402   77223 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:46.435408   77223 kubeadm.go:310] 
	I1213 20:23:46.435455   77223 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:46.435519   77223 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:46.435617   77223 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:46.435639   77223 kubeadm.go:310] 
	I1213 20:23:46.435750   77223 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:46.435854   77223 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:46.435869   77223 kubeadm.go:310] 
	I1213 20:23:46.435980   77223 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436148   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:46.436179   77223 kubeadm.go:310] 	--control-plane 
	I1213 20:23:46.436189   77223 kubeadm.go:310] 
	I1213 20:23:46.436310   77223 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:46.436321   77223 kubeadm.go:310] 
	I1213 20:23:46.436460   77223 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ztynho.1kbaokhemrbxet6k \
	I1213 20:23:46.436635   77223 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
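
The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl pipeline, assuming an RSA CA key (the kubeadm default) and the certificateDir shown earlier in the log:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
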
	I1213 20:23:46.436652   77223 cni.go:84] Creating CNI manager for ""
	I1213 20:23:46.436659   77223 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:46.438047   77223 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:23:42.109283   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:42.126005   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:42.126094   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:42.169463   78367 cri.go:89] found id: ""
	I1213 20:23:42.169494   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.169505   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:42.169512   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:42.169573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:42.214207   78367 cri.go:89] found id: ""
	I1213 20:23:42.214237   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.214248   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:42.214265   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:42.214327   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:42.255998   78367 cri.go:89] found id: ""
	I1213 20:23:42.256030   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.256041   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:42.256049   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:42.256104   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:42.295578   78367 cri.go:89] found id: ""
	I1213 20:23:42.295607   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.295618   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:42.295625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:42.295686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:42.336462   78367 cri.go:89] found id: ""
	I1213 20:23:42.336489   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.336501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:42.336509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:42.336568   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:42.377959   78367 cri.go:89] found id: ""
	I1213 20:23:42.377987   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.377998   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:42.378020   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:42.378083   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:42.421761   78367 cri.go:89] found id: ""
	I1213 20:23:42.421790   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.421799   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:42.421807   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:42.421866   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:42.456346   78367 cri.go:89] found id: ""
	I1213 20:23:42.456373   78367 logs.go:282] 0 containers: []
	W1213 20:23:42.456387   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:42.456397   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:42.456411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:42.472200   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:42.472241   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:42.544913   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:42.544938   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:42.544954   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:42.646820   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:42.646869   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:42.685374   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:42.685411   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.244342   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:45.257131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:45.257210   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:45.291023   78367 cri.go:89] found id: ""
	I1213 20:23:45.291064   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.291072   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:45.291085   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:45.291145   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:45.322469   78367 cri.go:89] found id: ""
	I1213 20:23:45.322499   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.322509   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:45.322516   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:45.322574   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:45.364647   78367 cri.go:89] found id: ""
	I1213 20:23:45.364679   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.364690   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:45.364696   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:45.364754   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:45.406124   78367 cri.go:89] found id: ""
	I1213 20:23:45.406151   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.406161   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:45.406169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:45.406229   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:45.449418   78367 cri.go:89] found id: ""
	I1213 20:23:45.449442   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.449450   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:45.449456   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:45.449513   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:45.491190   78367 cri.go:89] found id: ""
	I1213 20:23:45.491221   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.491231   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:45.491239   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:45.491312   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:45.537336   78367 cri.go:89] found id: ""
	I1213 20:23:45.537365   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.537375   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:45.537383   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:45.537442   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:45.574826   78367 cri.go:89] found id: ""
	I1213 20:23:45.574873   78367 logs.go:282] 0 containers: []
	W1213 20:23:45.574884   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:45.574897   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:45.574911   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:45.656859   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:45.656900   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:45.671183   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:45.671211   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:45.748645   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:45.748670   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:45.748684   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:45.861549   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:45.861598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:43.360177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:43.360711   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:43.360749   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:43.360702   79856 retry.go:31] will retry after 910.256009ms: waiting for domain to come up
	I1213 20:23:44.272014   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:44.272526   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:44.272555   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:44.272488   79856 retry.go:31] will retry after 1.534434138s: waiting for domain to come up
	I1213 20:23:45.809190   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:45.809761   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:45.809786   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:45.809755   79856 retry.go:31] will retry after 2.307546799s: waiting for domain to come up
	I1213 20:23:48.120134   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:48.120663   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:48.120688   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:48.120620   79856 retry.go:31] will retry after 2.815296829s: waiting for domain to come up
	I1213 20:23:46.250264   77510 out.go:235]   - Booting up control plane ...
	I1213 20:23:46.250387   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:23:46.250522   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:23:46.250655   77510 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:23:46.274127   77510 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:23:46.280501   77510 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:23:46.280570   77510 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:23:46.407152   77510 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 20:23:46.407342   77510 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 20:23:46.909234   77510 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.289561ms
	I1213 20:23:46.909341   77510 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
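
The kubelet-check and api-check phases above poll local health endpoints, which can also be queried directly on the node. A sketch, assuming anonymous access to the API server's /healthz is still enabled (the kubeadm default) and using the 8444 port from this profile's endpoint:

    curl -sf  http://127.0.0.1:10248/healthz  && echo "kubelet healthy"
    curl -skf https://127.0.0.1:8444/healthz  && echo "apiserver healthy"
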
	I1213 20:23:46.439167   77223 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:46.452642   77223 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:46.478384   77223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:46.478435   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:46.478467   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-475934 minikube.k8s.io/updated_at=2024_12_13T20_23_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=no-preload-475934 minikube.k8s.io/primary=true
	I1213 20:23:46.497425   77223 ops.go:34] apiserver oom_adj: -16
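
The clusterrolebinding and node labels applied above can be verified afterwards with the same bundled kubectl and kubeconfig the log uses:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node no-preload-475934 --show-labels
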
	I1213 20:23:46.697773   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.198632   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:47.697921   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.198923   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:48.697941   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.198682   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:49.698572   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.198476   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.698077   77223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:50.793538   77223 kubeadm.go:1113] duration metric: took 4.315156477s to wait for elevateKubeSystemPrivileges
	I1213 20:23:50.793579   77223 kubeadm.go:394] duration metric: took 5m1.991513079s to StartCluster
	I1213 20:23:50.793600   77223 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.793686   77223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:50.795098   77223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:50.795375   77223 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.128 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:50.795446   77223 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:50.795546   77223 addons.go:69] Setting storage-provisioner=true in profile "no-preload-475934"
	I1213 20:23:50.795565   77223 addons.go:234] Setting addon storage-provisioner=true in "no-preload-475934"
	W1213 20:23:50.795574   77223 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:50.795605   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.795621   77223 config.go:182] Loaded profile config "no-preload-475934": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:50.795673   77223 addons.go:69] Setting default-storageclass=true in profile "no-preload-475934"
	I1213 20:23:50.795698   77223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-475934"
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796080   77223 addons.go:69] Setting dashboard=true in profile "no-preload-475934"
	I1213 20:23:50.796098   77223 addons.go:234] Setting addon dashboard=true in "no-preload-475934"
	I1213 20:23:50.796100   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1213 20:23:50.796105   77223 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:50.796129   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796167   77223 addons.go:69] Setting metrics-server=true in profile "no-preload-475934"
	I1213 20:23:50.796187   77223 addons.go:234] Setting addon metrics-server=true in "no-preload-475934"
	W1213 20:23:50.796195   77223 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:50.796223   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.796066   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796371   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796476   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796502   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.796625   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.796665   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.802558   77223 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:50.804240   77223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:50.815506   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1213 20:23:50.815508   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1213 20:23:50.815849   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I1213 20:23:50.816023   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816131   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816355   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816463   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1213 20:23:50.816587   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816610   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816711   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.816731   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.816857   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.816968   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817049   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817074   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817091   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.817187   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.817334   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.817353   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.817814   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.817854   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818079   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818094   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.818681   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818685   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.818721   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.818756   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.839237   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1213 20:23:50.855736   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.856284   77223 addons.go:234] Setting addon default-storageclass=true in "no-preload-475934"
	W1213 20:23:50.856308   77223 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:50.856341   77223 host.go:66] Checking if "no-preload-475934" exists ...
	I1213 20:23:50.856381   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.856404   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.856715   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.856733   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.856757   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.857004   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.859133   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.861074   77223 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:50.862375   77223 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:50.863494   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:50.863514   77223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:50.863535   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.874249   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.874355   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874381   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.874406   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.874481   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.874755   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.875083   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.876889   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1213 20:23:50.876927   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I1213 20:23:50.877256   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1213 20:23:50.877531   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877577   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.877899   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.878141   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878154   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878167   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878170   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878413   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.878435   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.878483   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878527   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878869   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.878879   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.878893   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.879461   77223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:50.879507   77223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:50.880758   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.881011   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.882329   77223 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:50.882392   77223 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:50.883529   77223 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:50.883551   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:50.883911   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.884480   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:50.884501   77223 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:50.884518   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.888177   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888302   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888537   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888583   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.888850   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.888867   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.888870   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.889051   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.889070   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889186   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889244   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.889291   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.889578   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.889741   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:50.900416   77223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1213 20:23:50.904150   77223 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:50.904681   77223 main.go:141] libmachine: Using API Version  1
	I1213 20:23:50.904710   77223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:50.905101   77223 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:50.905353   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetState
	I1213 20:23:50.907076   77223 main.go:141] libmachine: (no-preload-475934) Calling .DriverName
	I1213 20:23:50.907309   77223 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:50.907327   77223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:50.907346   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHHostname
	I1213 20:23:50.913266   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913676   77223 main.go:141] libmachine: (no-preload-475934) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a1:3e", ip: ""} in network mk-no-preload-475934: {Iface:virbr4 ExpiryTime:2024-12-13 21:18:22 +0000 UTC Type:0 Mac:52:54:00:b3:a1:3e Iaid: IPaddr:192.168.61.128 Prefix:24 Hostname:no-preload-475934 Clientid:01:52:54:00:b3:a1:3e}
	I1213 20:23:50.913698   77223 main.go:141] libmachine: (no-preload-475934) DBG | domain no-preload-475934 has defined IP address 192.168.61.128 and MAC address 52:54:00:b3:a1:3e in network mk-no-preload-475934
	I1213 20:23:50.913923   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHPort
	I1213 20:23:50.914129   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHKeyPath
	I1213 20:23:50.914296   77223 main.go:141] libmachine: (no-preload-475934) Calling .GetSSHUsername
	I1213 20:23:50.914481   77223 sshutil.go:53] new ssh client: &{IP:192.168.61.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/no-preload-475934/id_rsa Username:docker}
	I1213 20:23:51.062632   77223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:51.080757   77223 node_ready.go:35] waiting up to 6m0s for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096457   77223 node_ready.go:49] node "no-preload-475934" has status "Ready":"True"
	I1213 20:23:51.096488   77223 node_ready.go:38] duration metric: took 15.695926ms for node "no-preload-475934" to be "Ready" ...
	I1213 20:23:51.096501   77223 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:51.101069   77223 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:51.153214   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:51.201828   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:51.201861   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:51.257276   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:51.286719   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:51.286743   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
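
The node- and pod-readiness waits above, and the addon manifests being copied in, can be cross-checked from the host with kubectl, assuming the kubeconfig context is named after the profile (minikube's usual convention):

    kubectl --context no-preload-475934 get nodes
    kubectl --context no-preload-475934 -n kube-system get pod etcd-no-preload-475934
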
	I1213 20:23:48.414982   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:48.431396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:48.431482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:48.476067   78367 cri.go:89] found id: ""
	I1213 20:23:48.476112   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.476124   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:48.476131   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:48.476194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:48.517216   78367 cri.go:89] found id: ""
	I1213 20:23:48.517258   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.517269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:48.517277   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:48.517381   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:48.562993   78367 cri.go:89] found id: ""
	I1213 20:23:48.563092   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.563117   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:48.563135   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:48.563223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:48.604109   78367 cri.go:89] found id: ""
	I1213 20:23:48.604202   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.604224   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:48.604250   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:48.604348   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:48.651185   78367 cri.go:89] found id: ""
	I1213 20:23:48.651219   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.651230   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:48.651238   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:48.651317   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:48.695266   78367 cri.go:89] found id: ""
	I1213 20:23:48.695305   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.695317   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:48.695325   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:48.695389   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:48.741459   78367 cri.go:89] found id: ""
	I1213 20:23:48.741495   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.741506   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:48.741513   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:48.741573   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:48.785599   78367 cri.go:89] found id: ""
	I1213 20:23:48.785684   78367 logs.go:282] 0 containers: []
	W1213 20:23:48.785701   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:48.785716   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:48.785744   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:48.845741   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:48.845777   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:48.862971   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:48.863013   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:48.934300   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:48.934328   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:48.934344   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:49.023110   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:49.023154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:51.562149   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:51.580078   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:51.580154   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:51.624644   78367 cri.go:89] found id: ""
	I1213 20:23:51.624677   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.624688   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:51.624696   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:51.624756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:51.910904   77510 kubeadm.go:310] [api-check] The API server is healthy after 5.001533218s
	I1213 20:23:51.928221   77510 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 20:23:51.955180   77510 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 20:23:51.988925   77510 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 20:23:51.989201   77510 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-355668 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 20:23:52.006352   77510 kubeadm.go:310] [bootstrap-token] Using token: 62dvzj.gok594hxuxcynd4x
	I1213 20:23:50.939565   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:50.940051   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:50.940081   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:50.940008   79856 retry.go:31] will retry after 2.96641877s: waiting for domain to come up
	I1213 20:23:51.311455   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:51.311485   77223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:51.369375   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:51.369403   77223 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:51.424081   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:51.424111   77223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:51.425876   77223 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.425896   77223 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:51.467889   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:51.513308   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:51.513340   77223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:51.601978   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:51.602009   77223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:51.627122   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.627201   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.627580   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.629153   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629172   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629183   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.629191   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.629445   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.629463   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.629473   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.641253   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:51.641282   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:51.641576   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:51.641592   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:51.641593   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:51.656503   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:51.656529   77223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:51.736524   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:51.736554   77223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:51.766699   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:51.766786   77223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:51.801572   77223 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:51.801601   77223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:51.819179   77223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:52.110163   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110190   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110480   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.110500   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.110507   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.110514   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.110508   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113643   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.113667   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.113674   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551336   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.08338913s)
	I1213 20:23:52.551397   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551410   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551700   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.551721   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.551731   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:52.551739   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:52.551951   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:52.552000   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:52.552008   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:52.552025   77223 addons.go:475] Verifying addon metrics-server=true in "no-preload-475934"
	I1213 20:23:53.145015   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:53.262929   77223 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.44371085s)
	I1213 20:23:53.262987   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263007   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263335   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263355   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.263365   77223 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:53.263373   77223 main.go:141] libmachine: (no-preload-475934) Calling .Close
	I1213 20:23:53.263380   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263640   77223 main.go:141] libmachine: (no-preload-475934) DBG | Closing plugin on server side
	I1213 20:23:53.263680   77223 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:53.263688   77223 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:53.265176   77223 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-475934 addons enable metrics-server
	
	I1213 20:23:53.266358   77223 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1213 20:23:52.007746   77510 out.go:235]   - Configuring RBAC rules ...
	I1213 20:23:52.007914   77510 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 20:23:52.022398   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 20:23:52.033846   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 20:23:52.038811   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 20:23:52.052112   77510 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 20:23:52.068899   77510 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 20:23:52.319919   77510 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 20:23:52.804645   77510 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 20:23:53.320002   77510 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 20:23:53.321529   77510 kubeadm.go:310] 
	I1213 20:23:53.321648   77510 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 20:23:53.321684   77510 kubeadm.go:310] 
	I1213 20:23:53.321797   77510 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 20:23:53.321809   77510 kubeadm.go:310] 
	I1213 20:23:53.321843   77510 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 20:23:53.321931   77510 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 20:23:53.322014   77510 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 20:23:53.322039   77510 kubeadm.go:310] 
	I1213 20:23:53.322140   77510 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 20:23:53.322154   77510 kubeadm.go:310] 
	I1213 20:23:53.322237   77510 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 20:23:53.322253   77510 kubeadm.go:310] 
	I1213 20:23:53.322327   77510 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 20:23:53.322439   77510 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 20:23:53.322505   77510 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 20:23:53.322511   77510 kubeadm.go:310] 
	I1213 20:23:53.322642   77510 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 20:23:53.322757   77510 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 20:23:53.322771   77510 kubeadm.go:310] 
	I1213 20:23:53.322937   77510 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323079   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad \
	I1213 20:23:53.323132   77510 kubeadm.go:310] 	--control-plane 
	I1213 20:23:53.323149   77510 kubeadm.go:310] 
	I1213 20:23:53.323269   77510 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 20:23:53.323280   77510 kubeadm.go:310] 
	I1213 20:23:53.323407   77510 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 62dvzj.gok594hxuxcynd4x \
	I1213 20:23:53.323556   77510 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b927cc699f96ad11d9aa77520496913d5873f96a2e411ce1bcbe6def5a1747ad 
	I1213 20:23:53.324551   77510 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:23:53.324579   77510 cni.go:84] Creating CNI manager for ""
	I1213 20:23:53.324591   77510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:23:53.326071   77510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:23:53.327260   77510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:23:53.338245   77510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:23:53.359781   77510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:23:53.359954   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.360067   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-355668 minikube.k8s.io/updated_at=2024_12_13T20_23_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=default-k8s-diff-port-355668 minikube.k8s.io/primary=true
	I1213 20:23:53.378620   77510 ops.go:34] apiserver oom_adj: -16
	I1213 20:23:53.595107   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.095889   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:54.596033   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:53.267500   77223 addons.go:510] duration metric: took 2.472063966s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1213 20:23:55.608441   77223 pod_ready.go:103] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:51.673392   78367 cri.go:89] found id: ""
	I1213 20:23:51.673421   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.673432   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:51.673440   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:51.673501   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:51.721445   78367 cri.go:89] found id: ""
	I1213 20:23:51.721472   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.721480   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:51.721488   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:51.721544   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:51.755079   78367 cri.go:89] found id: ""
	I1213 20:23:51.755112   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.755123   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:51.755131   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:51.755194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:51.796420   78367 cri.go:89] found id: ""
	I1213 20:23:51.796457   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.796470   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:51.796478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:51.796542   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:51.830054   78367 cri.go:89] found id: ""
	I1213 20:23:51.830080   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.830090   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:51.830098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:51.830153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:51.867546   78367 cri.go:89] found id: ""
	I1213 20:23:51.867574   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.867584   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:51.867592   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:51.867653   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:51.911804   78367 cri.go:89] found id: ""
	I1213 20:23:51.911830   78367 logs.go:282] 0 containers: []
	W1213 20:23:51.911841   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:51.911853   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:51.911867   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:51.981311   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:51.981340   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:51.997948   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:51.997995   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:52.078493   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:52.078526   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:52.078541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:52.181165   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:52.181213   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:54.728341   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:54.742062   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:54.742122   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:54.779920   78367 cri.go:89] found id: ""
	I1213 20:23:54.779947   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.779958   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:54.779966   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:54.780021   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:54.813600   78367 cri.go:89] found id: ""
	I1213 20:23:54.813631   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.813641   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:54.813649   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:54.813711   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:54.846731   78367 cri.go:89] found id: ""
	I1213 20:23:54.846761   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.846771   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:54.846778   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:54.846837   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:54.878598   78367 cri.go:89] found id: ""
	I1213 20:23:54.878628   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.878638   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:54.878646   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:54.878706   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:54.914259   78367 cri.go:89] found id: ""
	I1213 20:23:54.914293   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.914304   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:54.914318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:54.914383   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:54.947232   78367 cri.go:89] found id: ""
	I1213 20:23:54.947264   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.947275   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:54.947283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:54.947350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:54.992079   78367 cri.go:89] found id: ""
	I1213 20:23:54.992108   78367 logs.go:282] 0 containers: []
	W1213 20:23:54.992118   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:54.992125   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:54.992184   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:55.035067   78367 cri.go:89] found id: ""
	I1213 20:23:55.035093   78367 logs.go:282] 0 containers: []
	W1213 20:23:55.035100   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:55.035109   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:55.035122   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:55.108198   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:55.108224   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:55.108238   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:55.197303   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:55.197333   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:23:55.248131   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:55.248154   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:55.301605   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:55.301635   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:53.907724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:53.908424   79820 main.go:141] libmachine: (newest-cni-535459) DBG | unable to find current IP address of domain newest-cni-535459 in network mk-newest-cni-535459
	I1213 20:23:53.908470   79820 main.go:141] libmachine: (newest-cni-535459) DBG | I1213 20:23:53.908391   79856 retry.go:31] will retry after 4.35778362s: waiting for domain to come up
	I1213 20:23:55.095857   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:55.595908   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.095409   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:56.595238   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.095945   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:57.595757   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.095963   77510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 20:23:58.198049   77510 kubeadm.go:1113] duration metric: took 4.838144553s to wait for elevateKubeSystemPrivileges
	I1213 20:23:58.198082   77510 kubeadm.go:394] duration metric: took 5m1.770847274s to StartCluster
	I1213 20:23:58.198102   77510 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.198176   77510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:23:58.199549   77510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:23:58.199800   77510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:23:58.199963   77510 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:23:58.200086   77510 config.go:182] Loaded profile config "default-k8s-diff-port-355668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:58.200131   77510 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200150   77510 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200166   77510 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:23:58.200189   77510 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200199   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200211   77510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-355668"
	I1213 20:23:58.200610   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200626   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.200639   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200656   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.200712   77510 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200712   77510 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-355668"
	I1213 20:23:58.200725   77510 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200732   77510 addons.go:243] addon dashboard should already be in state true
	I1213 20:23:58.200733   77510 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.200742   77510 addons.go:243] addon metrics-server should already be in state true
	I1213 20:23:58.200754   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.200771   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.205916   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205937   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.205960   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.205976   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.206755   77510 out.go:177] * Verifying Kubernetes components...
	I1213 20:23:58.208075   77510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:23:58.223074   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1213 20:23:58.223694   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.224155   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.224170   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.224674   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.224863   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.226583   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I1213 20:23:58.227150   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.227693   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.227712   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.228163   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.228437   77510 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-355668"
	W1213 20:23:58.228457   77510 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:23:58.228483   77510 host.go:66] Checking if "default-k8s-diff-port-355668" exists ...
	I1213 20:23:58.228838   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228847   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.228871   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.228882   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.238833   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I1213 20:23:58.245605   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I1213 20:23:58.246100   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.246630   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.246648   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.247050   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.247623   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.247662   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.249751   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I1213 20:23:58.250222   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.250772   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.250789   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.254939   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1213 20:23:58.254977   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.254944   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.255395   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.255455   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.255928   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.255944   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.256275   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.256811   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.256843   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.258976   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.259498   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.259515   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.260075   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.260720   77510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:23:58.260752   77510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:23:58.261030   77510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:23:58.262210   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:23:58.262229   77510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:23:58.262248   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.265414   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266021   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.266045   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.266278   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.266441   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.266627   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.266776   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.268367   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I1213 20:23:58.269174   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.270087   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.270108   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.270905   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.271343   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.278504   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I1213 20:23:58.279047   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.279669   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.279685   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.280236   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.280583   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.281949   77510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I1213 20:23:58.282310   77510 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:23:58.283003   77510 main.go:141] libmachine: Using API Version  1
	I1213 20:23:58.283020   77510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:23:58.283408   77510 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:23:58.286964   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.286998   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.287032   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetState
	I1213 20:23:58.287233   77510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.287250   77510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:23:58.287276   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.288987   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .DriverName
	I1213 20:23:58.289809   77510 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:23:58.290685   77510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:23:58.292753   77510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.292774   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:23:58.292792   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.292849   77510 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:23:56.611155   77223 pod_ready.go:93] pod "etcd-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:56.611190   77223 pod_ready.go:82] duration metric: took 5.510087654s for pod "etcd-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:56.611203   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116912   77223 pod_ready.go:93] pod "kube-apiserver-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.116945   77223 pod_ready.go:82] duration metric: took 505.733979ms for pod "kube-apiserver-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.116958   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121384   77223 pod_ready.go:93] pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:57.121411   77223 pod_ready.go:82] duration metric: took 4.445498ms for pod "kube-controller-manager-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:57.121425   77223 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.129454   77223 pod_ready.go:103] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"False"
	I1213 20:23:59.662780   77223 pod_ready.go:93] pod "kube-scheduler-no-preload-475934" in "kube-system" namespace has status "Ready":"True"
	I1213 20:23:59.662813   77223 pod_ready.go:82] duration metric: took 2.541378671s for pod "kube-scheduler-no-preload-475934" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:59.662828   77223 pod_ready.go:39] duration metric: took 8.566311765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:59.662869   77223 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:23:59.662936   77223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:59.685691   77223 api_server.go:72] duration metric: took 8.890275631s to wait for apiserver process to appear ...
	I1213 20:23:59.685722   77223 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:23:59.685743   77223 api_server.go:253] Checking apiserver healthz at https://192.168.61.128:8443/healthz ...
	I1213 20:23:59.692539   77223 api_server.go:279] https://192.168.61.128:8443/healthz returned 200:
	ok
	I1213 20:23:59.694289   77223 api_server.go:141] control plane version: v1.31.2
	I1213 20:23:59.694317   77223 api_server.go:131] duration metric: took 8.58708ms to wait for apiserver health ...
	I1213 20:23:59.694327   77223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:23:59.703648   77223 system_pods.go:59] 9 kube-system pods found
	I1213 20:23:59.703682   77223 system_pods.go:61] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.703691   77223 system_pods.go:61] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.703697   77223 system_pods.go:61] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.703703   77223 system_pods.go:61] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.703711   77223 system_pods.go:61] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.703716   77223 system_pods.go:61] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.703721   77223 system_pods.go:61] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.703732   77223 system_pods.go:61] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.703742   77223 system_pods.go:61] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.703752   77223 system_pods.go:74] duration metric: took 9.418447ms to wait for pod list to return data ...
	I1213 20:23:59.703761   77223 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:23:59.713584   77223 default_sa.go:45] found service account: "default"
	I1213 20:23:59.713610   77223 default_sa.go:55] duration metric: took 9.841478ms for default service account to be created ...
	I1213 20:23:59.713621   77223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:23:59.720207   77223 system_pods.go:86] 9 kube-system pods found
	I1213 20:23:59.720230   77223 system_pods.go:89] "coredns-7c65d6cfc9-gksk2" [2099250f-c8ad-4c8d-b5da-9468b16e90de] Running
	I1213 20:23:59.720236   77223 system_pods.go:89] "coredns-7c65d6cfc9-gl527" [974ba38b-6931-4e46-aece-5b72bffab803] Running
	I1213 20:23:59.720240   77223 system_pods.go:89] "etcd-no-preload-475934" [725feb76-9ad0-4640-ba25-2eae13596bba] Running
	I1213 20:23:59.720244   77223 system_pods.go:89] "kube-apiserver-no-preload-475934" [56776240-3677-4af6-bba4-dd1a261d5560] Running
	I1213 20:23:59.720247   77223 system_pods.go:89] "kube-controller-manager-no-preload-475934" [86f1bb7e-ee5d-441d-a38a-1a0f74fec6e4] Running
	I1213 20:23:59.720251   77223 system_pods.go:89] "kube-proxy-s5k7k" [db2eddc8-a260-42e5-8590-3475eb56a54b] Running
	I1213 20:23:59.720255   77223 system_pods.go:89] "kube-scheduler-no-preload-475934" [5e10b82e-e677-4f7d-bbd5-6e494b0796af] Running
	I1213 20:23:59.720268   77223 system_pods.go:89] "metrics-server-6867b74b74-l2mch" [b7c19469-9a0d-4136-beed-c2c309e610cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:23:59.720272   77223 system_pods.go:89] "storage-provisioner" [1bfd0b04-9a54-4a03-8e93-ffe4566108a1] Running
	I1213 20:23:59.720279   77223 system_pods.go:126] duration metric: took 6.653114ms to wait for k8s-apps to be running ...
	I1213 20:23:59.720288   77223 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:23:59.720325   77223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:23:59.743000   77223 system_svc.go:56] duration metric: took 22.70094ms WaitForService to wait for kubelet
	I1213 20:23:59.743035   77223 kubeadm.go:582] duration metric: took 8.947624109s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:23:59.743057   77223 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:23:59.747281   77223 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:23:59.747321   77223 node_conditions.go:123] node cpu capacity is 2
	I1213 20:23:59.747337   77223 node_conditions.go:105] duration metric: took 4.273745ms to run NodePressure ...
	I1213 20:23:59.747353   77223 start.go:241] waiting for startup goroutines ...
	I1213 20:23:59.747363   77223 start.go:246] waiting for cluster config update ...
	I1213 20:23:59.747380   77223 start.go:255] writing updated cluster config ...
	I1213 20:23:59.747732   77223 ssh_runner.go:195] Run: rm -f paused
	I1213 20:23:59.820239   77223 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:23:59.821954   77223 out.go:177] * Done! kubectl is now configured to use "no-preload-475934" cluster and "default" namespace by default
	I1213 20:23:58.293751   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294127   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:23:58.294142   77510 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:23:58.294178   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHHostname
	I1213 20:23:58.294280   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.294376   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.294629   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.294779   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.294932   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.295104   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.296706   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297082   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.297117   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.297252   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.297422   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.297574   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.297699   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.298144   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298502   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:ab:46", ip: ""} in network mk-default-k8s-diff-port-355668: {Iface:virbr1 ExpiryTime:2024-12-13 21:18:42 +0000 UTC Type:0 Mac:52:54:00:22:ab:46 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:default-k8s-diff-port-355668 Clientid:01:52:54:00:22:ab:46}
	I1213 20:23:58.298608   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | domain default-k8s-diff-port-355668 has defined IP address 192.168.39.233 and MAC address 52:54:00:22:ab:46 in network mk-default-k8s-diff-port-355668
	I1213 20:23:58.298673   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHPort
	I1213 20:23:58.298828   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHKeyPath
	I1213 20:23:58.299124   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .GetSSHUsername
	I1213 20:23:58.299253   77510 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/default-k8s-diff-port-355668/id_rsa Username:docker}
	I1213 20:23:58.437780   77510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:23:58.458240   77510 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495039   77510 node_ready.go:49] node "default-k8s-diff-port-355668" has status "Ready":"True"
	I1213 20:23:58.495124   77510 node_ready.go:38] duration metric: took 36.851728ms for node "default-k8s-diff-port-355668" to be "Ready" ...
	I1213 20:23:58.495141   77510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:23:58.506404   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:23:58.548351   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:23:58.548377   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:23:58.570739   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:23:58.570762   77510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:23:58.591010   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:23:58.598380   77510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.598406   77510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:23:58.612228   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:23:58.612255   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:23:58.616620   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:23:58.643759   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:23:58.643785   77510 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:23:58.657745   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:23:58.696453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:23:58.696548   77510 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:23:58.760682   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:23:58.760710   77510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:23:58.851490   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:23:58.851514   77510 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:23:58.930302   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:23:58.930330   77510 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:23:58.991218   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:23:58.991261   77510 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:23:59.066139   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:23:59.066169   77510 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:23:59.102453   77510 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.102479   77510 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:23:59.182801   77510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:23:59.970886   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.379839482s)
	I1213 20:23:59.970942   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.970957   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971058   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354409285s)
	I1213 20:23:59.971081   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971091   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971200   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313427588s)
	I1213 20:23:59.971217   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971227   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971296   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971333   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971340   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971348   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971355   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971564   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971577   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971587   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971594   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.971800   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971830   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.971836   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971848   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.971861   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971860   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.971873   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:23:59.971883   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:23:59.974115   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974153   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974161   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:23:59.974168   77510 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-355668"
	I1213 20:23:59.974222   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:23:59.974245   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:23:59.974255   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.001667   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:00.001698   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:00.002135   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:00.002164   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:00.002136   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) DBG | Closing plugin on server side
	I1213 20:24:00.532171   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.475325   77510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.292470675s)
	I1213 20:24:01.475377   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475399   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475719   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475733   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.475742   77510 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:01.475750   77510 main.go:141] libmachine: (default-k8s-diff-port-355668) Calling .Close
	I1213 20:24:01.475977   77510 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:01.475990   77510 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:01.478505   77510 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-355668 addons enable metrics-server
	
	I1213 20:24:01.479872   77510 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1213 20:23:58.270264   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.270365   79820 main.go:141] libmachine: (newest-cni-535459) found domain IP: 192.168.50.11
	I1213 20:23:58.270394   79820 main.go:141] libmachine: (newest-cni-535459) reserving static IP address...
	I1213 20:23:58.270420   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has current primary IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.271183   79820 main.go:141] libmachine: (newest-cni-535459) reserved static IP address 192.168.50.11 for domain newest-cni-535459
	I1213 20:23:58.271227   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.271247   79820 main.go:141] libmachine: (newest-cni-535459) waiting for SSH...
	I1213 20:23:58.271278   79820 main.go:141] libmachine: (newest-cni-535459) DBG | skip adding static IP to network mk-newest-cni-535459 - found existing host DHCP lease matching {name: "newest-cni-535459", mac: "52:54:00:7d:17:89", ip: "192.168.50.11"}
	I1213 20:23:58.271286   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Getting to WaitForSSH function...
	I1213 20:23:58.277440   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283137   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.283166   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.283641   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH client type: external
	I1213 20:23:58.283664   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Using SSH private key: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa (-rw-------)
	I1213 20:23:58.283702   79820 main.go:141] libmachine: (newest-cni-535459) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 20:23:58.283712   79820 main.go:141] libmachine: (newest-cni-535459) DBG | About to run SSH command:
	I1213 20:23:58.283724   79820 main.go:141] libmachine: (newest-cni-535459) DBG | exit 0
	I1213 20:23:58.431895   79820 main.go:141] libmachine: (newest-cni-535459) DBG | SSH cmd err, output: <nil>: 
	I1213 20:23:58.432276   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetConfigRaw
	I1213 20:23:58.433028   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.436521   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.436848   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.436875   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.437192   79820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/config.json ...
	I1213 20:23:58.437455   79820 machine.go:93] provisionDockerMachine start ...
	I1213 20:23:58.437480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:58.437689   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.440580   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441089   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.441132   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.441277   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.441491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441620   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.441769   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.441918   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.442164   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.442183   79820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 20:23:58.559163   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 20:23:58.559200   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559468   79820 buildroot.go:166] provisioning hostname "newest-cni-535459"
	I1213 20:23:58.559498   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.559678   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.562818   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563374   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.563402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.563582   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.563766   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.563919   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.564082   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.564268   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.564508   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.564530   79820 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-535459 && echo "newest-cni-535459" | sudo tee /etc/hostname
	I1213 20:23:58.696712   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-535459
	
	I1213 20:23:58.696798   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.700359   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.700838   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.700864   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.701015   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:58.701205   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701411   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:58.701579   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:58.701764   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:58.702008   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:58.702036   79820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-535459' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-535459/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-535459' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 20:23:58.827902   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 20:23:58.827937   79820 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20090-12353/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-12353/.minikube}
	I1213 20:23:58.827979   79820 buildroot.go:174] setting up certificates
	I1213 20:23:58.827999   79820 provision.go:84] configureAuth start
	I1213 20:23:58.828016   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetMachineName
	I1213 20:23:58.828306   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:58.831180   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831550   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.831588   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.831736   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:58.833951   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834312   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:58.834355   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:58.834505   79820 provision.go:143] copyHostCerts
	I1213 20:23:58.834581   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem, removing ...
	I1213 20:23:58.834598   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem
	I1213 20:23:58.834689   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/ca.pem (1082 bytes)
	I1213 20:23:58.834879   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem, removing ...
	I1213 20:23:58.834898   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem
	I1213 20:23:58.834948   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/cert.pem (1123 bytes)
	I1213 20:23:58.835048   79820 exec_runner.go:144] found /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem, removing ...
	I1213 20:23:58.835067   79820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem
	I1213 20:23:58.835107   79820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-12353/.minikube/key.pem (1675 bytes)
	I1213 20:23:58.835195   79820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-535459 san=[127.0.0.1 192.168.50.11 localhost minikube newest-cni-535459]
	I1213 20:23:59.091370   79820 provision.go:177] copyRemoteCerts
	I1213 20:23:59.091432   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 20:23:59.091482   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.094717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095146   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.095177   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.095370   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.095547   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.095707   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.095832   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.177442   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 20:23:59.202054   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 20:23:59.228527   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 20:23:59.254148   79820 provision.go:87] duration metric: took 426.134893ms to configureAuth
	I1213 20:23:59.254187   79820 buildroot.go:189] setting minikube options for container-runtime
	I1213 20:23:59.254402   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:23:59.254467   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.257684   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258113   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.258139   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.258369   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.258575   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258743   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.258913   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.259101   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.259355   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.259378   79820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 20:23:59.495940   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 20:23:59.495974   79820 machine.go:96] duration metric: took 1.058500785s to provisionDockerMachine
	I1213 20:23:59.495990   79820 start.go:293] postStartSetup for "newest-cni-535459" (driver="kvm2")
	I1213 20:23:59.496006   79820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 20:23:59.496029   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.496330   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 20:23:59.496359   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.499780   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500193   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.500234   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.500450   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.500642   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.500813   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.500918   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.582993   79820 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 20:23:59.588260   79820 info.go:137] Remote host: Buildroot 2023.02.9
	I1213 20:23:59.588297   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/addons for local assets ...
	I1213 20:23:59.588362   79820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-12353/.minikube/files for local assets ...
	I1213 20:23:59.588431   79820 filesync.go:149] local asset: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem -> 195442.pem in /etc/ssl/certs
	I1213 20:23:59.588562   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 20:23:59.601947   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:23:59.631405   79820 start.go:296] duration metric: took 135.398616ms for postStartSetup
	I1213 20:23:59.631454   79820 fix.go:56] duration metric: took 21.330020412s for fixHost
	I1213 20:23:59.631480   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.634516   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.634952   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.635000   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.635198   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.635387   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635543   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.635691   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.635840   79820 main.go:141] libmachine: Using SSH client type: native
	I1213 20:23:59.636070   79820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I1213 20:23:59.636084   79820 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 20:23:59.749289   79820 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734121439.718006490
	
	I1213 20:23:59.749313   79820 fix.go:216] guest clock: 1734121439.718006490
	I1213 20:23:59.749322   79820 fix.go:229] Guest: 2024-12-13 20:23:59.71800649 +0000 UTC Remote: 2024-12-13 20:23:59.631459768 +0000 UTC m=+21.470518452 (delta=86.546722ms)
	I1213 20:23:59.749347   79820 fix.go:200] guest clock delta is within tolerance: 86.546722ms
	I1213 20:23:59.749361   79820 start.go:83] releasing machines lock for "newest-cni-535459", held for 21.447944205s
	I1213 20:23:59.749385   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.749655   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:23:59.752968   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753402   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.753426   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.753606   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754075   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754269   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:23:59.754364   79820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 20:23:59.754400   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.754690   79820 ssh_runner.go:195] Run: cat /version.json
	I1213 20:23:59.754714   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:23:59.757878   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.767628   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.767685   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768022   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768079   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768303   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:23:59.768325   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:23:59.768458   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:23:59.768631   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768681   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:23:59.768814   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.768849   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:23:59.769016   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.769027   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:23:59.888086   79820 ssh_runner.go:195] Run: systemctl --version
	I1213 20:23:59.899362   79820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 20:24:00.063446   79820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 20:24:00.072249   79820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 20:24:00.072336   79820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 20:24:00.093748   79820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 20:24:00.093780   79820 start.go:495] detecting cgroup driver to use...
	I1213 20:24:00.093849   79820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 20:24:00.117356   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 20:24:00.135377   79820 docker.go:217] disabling cri-docker service (if available) ...
	I1213 20:24:00.135437   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 20:24:00.155178   79820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 20:24:00.171890   79820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 20:24:00.321669   79820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 20:24:00.533366   79820 docker.go:233] disabling docker service ...
	I1213 20:24:00.533432   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 20:24:00.551511   79820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 20:24:00.569283   79820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 20:24:00.748948   79820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 20:24:00.924287   79820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 20:24:00.938559   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 20:24:00.958306   79820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 20:24:00.958394   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.968592   79820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 20:24:00.968667   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.979213   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:00.993825   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.004141   79820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 20:24:01.015195   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.025731   79820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.048789   79820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 20:24:01.062542   79820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 20:24:01.074137   79820 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 20:24:01.074218   79820 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 20:24:01.091233   79820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 20:24:01.103721   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:01.274965   79820 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 20:24:01.400580   79820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 20:24:01.400700   79820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 20:24:01.406514   79820 start.go:563] Will wait 60s for crictl version
	I1213 20:24:01.406581   79820 ssh_runner.go:195] Run: which crictl
	I1213 20:24:01.411798   79820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 20:24:01.463581   79820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 20:24:01.463672   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.503505   79820 ssh_runner.go:195] Run: crio --version
	I1213 20:24:01.545804   79820 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1213 20:24:01.547133   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetIP
	I1213 20:24:01.550717   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:01.551198   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:01.551399   79820 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 20:24:01.555655   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:01.574604   79820 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 20:23:57.815345   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:23:57.830459   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:23:57.830536   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:23:57.867421   78367 cri.go:89] found id: ""
	I1213 20:23:57.867450   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.867462   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:23:57.867470   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:23:57.867528   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:23:57.904972   78367 cri.go:89] found id: ""
	I1213 20:23:57.905010   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.905021   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:23:57.905029   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:23:57.905092   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:23:57.951889   78367 cri.go:89] found id: ""
	I1213 20:23:57.951916   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.951928   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:23:57.951936   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:23:57.952010   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:23:57.998664   78367 cri.go:89] found id: ""
	I1213 20:23:57.998697   78367 logs.go:282] 0 containers: []
	W1213 20:23:57.998708   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:23:57.998715   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:23:57.998772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:23:58.047566   78367 cri.go:89] found id: ""
	I1213 20:23:58.047597   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.047608   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:23:58.047625   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:23:58.047686   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:23:58.082590   78367 cri.go:89] found id: ""
	I1213 20:23:58.082619   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.082629   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:23:58.082637   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:23:58.082694   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:23:58.125035   78367 cri.go:89] found id: ""
	I1213 20:23:58.125071   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.125080   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:23:58.125087   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:23:58.125147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:23:58.168019   78367 cri.go:89] found id: ""
	I1213 20:23:58.168049   78367 logs.go:282] 0 containers: []
	W1213 20:23:58.168060   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:23:58.168078   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:23:58.168092   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:23:58.268179   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:23:58.268212   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:23:58.303166   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:23:58.303192   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:23:58.393172   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:23:58.393206   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:23:58.393220   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:23:58.489198   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:23:58.489230   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:01.033661   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:01.047673   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:01.047747   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:01.089498   78367 cri.go:89] found id: ""
	I1213 20:24:01.089526   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.089536   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:01.089543   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:01.089605   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:01.130215   78367 cri.go:89] found id: ""
	I1213 20:24:01.130245   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.130256   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:01.130264   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:01.130326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:01.177064   78367 cri.go:89] found id: ""
	I1213 20:24:01.177102   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.177119   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:01.177126   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:01.177187   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:01.231277   78367 cri.go:89] found id: ""
	I1213 20:24:01.231312   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.231324   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:01.231332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:01.231395   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:01.277419   78367 cri.go:89] found id: ""
	I1213 20:24:01.277446   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.277456   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:01.277463   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:01.277519   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:01.322970   78367 cri.go:89] found id: ""
	I1213 20:24:01.322996   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.323007   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:01.323017   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:01.323087   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:01.369554   78367 cri.go:89] found id: ""
	I1213 20:24:01.369585   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.369596   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:01.369603   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:01.369661   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:01.411927   78367 cri.go:89] found id: ""
	I1213 20:24:01.411957   78367 logs.go:282] 0 containers: []
	W1213 20:24:01.411967   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:01.411987   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:01.412005   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:01.486061   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:01.486097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:01.500644   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:01.500673   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:01.578266   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:01.578283   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:01.578293   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:01.575794   79820 kubeadm.go:883] updating cluster {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 20:24:01.575963   79820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 20:24:01.576035   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:01.617299   79820 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1213 20:24:01.617414   79820 ssh_runner.go:195] Run: which lz4
	I1213 20:24:01.621480   79820 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 20:24:01.625517   79820 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 20:24:01.625563   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1213 20:24:03.034691   79820 crio.go:462] duration metric: took 1.413259837s to copy over tarball
	I1213 20:24:03.034768   79820 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 20:24:01.481491   77510 addons.go:510] duration metric: took 3.281543559s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:02.601672   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:01.687325   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:01.687362   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.239043   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:04.252218   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:04.252292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:04.294778   78367 cri.go:89] found id: ""
	I1213 20:24:04.294810   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.294820   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:04.294828   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:04.294910   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:04.339012   78367 cri.go:89] found id: ""
	I1213 20:24:04.339049   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.339061   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:04.339069   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:04.339134   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:04.391028   78367 cri.go:89] found id: ""
	I1213 20:24:04.391064   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.391076   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:04.391084   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:04.391147   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:04.436260   78367 cri.go:89] found id: ""
	I1213 20:24:04.436291   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.436308   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:04.436316   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:04.436372   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:04.485225   78367 cri.go:89] found id: ""
	I1213 20:24:04.485255   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.485274   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:04.485283   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:04.485347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:04.527198   78367 cri.go:89] found id: ""
	I1213 20:24:04.527228   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.527239   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:04.527247   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:04.527306   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:04.567885   78367 cri.go:89] found id: ""
	I1213 20:24:04.567915   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.567926   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:04.567934   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:04.567984   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:04.608495   78367 cri.go:89] found id: ""
	I1213 20:24:04.608535   78367 logs.go:282] 0 containers: []
	W1213 20:24:04.608546   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:04.608557   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:04.608571   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:04.691701   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:04.691735   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:04.739203   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:04.739236   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:04.815994   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:04.816050   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:04.851237   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:04.851277   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:04.994736   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:05.429979   79820 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.395156779s)
	I1213 20:24:05.430008   79820 crio.go:469] duration metric: took 2.395289211s to extract the tarball
	I1213 20:24:05.430017   79820 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 20:24:05.486315   79820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 20:24:05.546704   79820 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 20:24:05.546729   79820 cache_images.go:84] Images are preloaded, skipping loading
	I1213 20:24:05.546737   79820 kubeadm.go:934] updating node { 192.168.50.11 8443 v1.31.2 crio true true} ...
	I1213 20:24:05.546882   79820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-535459 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 20:24:05.546997   79820 ssh_runner.go:195] Run: crio config
	I1213 20:24:05.617708   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:05.617734   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:05.617757   79820 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1213 20:24:05.617784   79820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.11 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-535459 NodeName:newest-cni-535459 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 20:24:05.617925   79820 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-535459"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 20:24:05.618013   79820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 20:24:05.631181   79820 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 20:24:05.631261   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 20:24:05.642971   79820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1213 20:24:05.662761   79820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 20:24:05.682676   79820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1213 20:24:05.706170   79820 ssh_runner.go:195] Run: grep 192.168.50.11	control-plane.minikube.internal$ /etc/hosts
	I1213 20:24:05.710946   79820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 20:24:05.733291   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:05.878920   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:05.899390   79820 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459 for IP: 192.168.50.11
	I1213 20:24:05.899419   79820 certs.go:194] generating shared ca certs ...
	I1213 20:24:05.899438   79820 certs.go:226] acquiring lock for ca certs: {Name:mka8994129240986519f4b0ac41f1e4e27ada985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:05.899615   79820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key
	I1213 20:24:05.899668   79820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key
	I1213 20:24:05.899681   79820 certs.go:256] generating profile certs ...
	I1213 20:24:05.899786   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/client.key
	I1213 20:24:05.899867   79820 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key.6c5572a8
	I1213 20:24:05.899919   79820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key
	I1213 20:24:05.900072   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem (1338 bytes)
	W1213 20:24:05.900112   79820 certs.go:480] ignoring /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544_empty.pem, impossibly tiny 0 bytes
	I1213 20:24:05.900124   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca-key.pem (1679 bytes)
	I1213 20:24:05.900156   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/ca.pem (1082 bytes)
	I1213 20:24:05.900187   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/cert.pem (1123 bytes)
	I1213 20:24:05.900215   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/certs/key.pem (1675 bytes)
	I1213 20:24:05.900269   79820 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem (1708 bytes)
	I1213 20:24:05.901141   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 20:24:05.939874   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 20:24:05.978129   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 20:24:06.014027   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 20:24:06.054231   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 20:24:06.082617   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 20:24:06.113846   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 20:24:06.160961   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/newest-cni-535459/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 20:24:06.186616   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/certs/19544.pem --> /usr/share/ca-certificates/19544.pem (1338 bytes)
	I1213 20:24:06.210814   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/ssl/certs/195442.pem --> /usr/share/ca-certificates/195442.pem (1708 bytes)
	I1213 20:24:06.235875   79820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-12353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 20:24:06.268351   79820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 20:24:06.289062   79820 ssh_runner.go:195] Run: openssl version
	I1213 20:24:06.295624   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19544.pem && ln -fs /usr/share/ca-certificates/19544.pem /etc/ssl/certs/19544.pem"
	I1213 20:24:06.309685   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314119   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 19:13 /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.314222   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19544.pem
	I1213 20:24:06.320247   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19544.pem /etc/ssl/certs/51391683.0"
	I1213 20:24:06.331949   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195442.pem && ln -fs /usr/share/ca-certificates/195442.pem /etc/ssl/certs/195442.pem"
	I1213 20:24:06.343731   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348018   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 19:13 /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.348081   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195442.pem
	I1213 20:24:06.353554   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195442.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 20:24:06.366858   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 20:24:06.377728   79820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382326   79820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.382401   79820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 20:24:06.390103   79820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 20:24:06.404838   79820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 20:24:06.410770   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 20:24:06.422025   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 20:24:06.431833   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 20:24:06.438647   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 20:24:06.444814   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 20:24:06.452219   79820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 20:24:06.458272   79820 kubeadm.go:392] StartCluster: {Name:newest-cni-535459 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-535459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil>
ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 20:24:06.458424   79820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 20:24:06.458491   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.506732   79820 cri.go:89] found id: ""
	I1213 20:24:06.506810   79820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 20:24:06.518343   79820 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1213 20:24:06.518376   79820 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1213 20:24:06.518430   79820 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 20:24:06.531209   79820 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 20:24:06.532070   79820 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-535459" does not appear in /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:06.532572   79820 kubeconfig.go:62] /home/jenkins/minikube-integration/20090-12353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-535459" cluster setting kubeconfig missing "newest-cni-535459" context setting]
	I1213 20:24:06.533290   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:06.539651   79820 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 20:24:06.550828   79820 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.11
	I1213 20:24:06.550886   79820 kubeadm.go:1160] stopping kube-system containers ...
	I1213 20:24:06.550902   79820 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 20:24:06.550970   79820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 20:24:06.612618   79820 cri.go:89] found id: ""
	I1213 20:24:06.612750   79820 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 20:24:06.636007   79820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:24:06.648489   79820 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:24:06.648512   79820 kubeadm.go:157] found existing configuration files:
	
	I1213 20:24:06.648563   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:24:06.660079   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:24:06.660154   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:24:06.672333   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:24:06.683617   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:24:06.683683   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:24:06.695818   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.706996   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:24:06.707073   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:24:06.718672   79820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:24:06.729768   79820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:24:06.729838   79820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:24:06.742002   79820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:24:06.754184   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:07.010247   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.064932   79820 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.054652155s)
	I1213 20:24:08.064963   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:05.014076   77510 pod_ready.go:103] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"False"
	I1213 20:24:06.021280   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.021310   77510 pod_ready.go:82] duration metric: took 7.514875372s for pod "coredns-7c65d6cfc9-kl689" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.021326   77510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035861   77510 pod_ready.go:93] pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.035888   77510 pod_ready.go:82] duration metric: took 14.555021ms for pod "coredns-7c65d6cfc9-sk656" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.035900   77510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979006   77510 pod_ready.go:93] pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.979035   77510 pod_ready.go:82] duration metric: took 943.126351ms for pod "etcd-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.979049   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989635   77510 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.989665   77510 pod_ready.go:82] duration metric: took 10.607567ms for pod "kube-apiserver-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.989677   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999141   77510 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:06.999235   77510 pod_ready.go:82] duration metric: took 9.54585ms for pod "kube-controller-manager-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:06.999273   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012290   77510 pod_ready.go:93] pod "kube-proxy-vjsf7" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.012314   77510 pod_ready.go:82] duration metric: took 13.004089ms for pod "kube-proxy-vjsf7" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.012327   77510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842063   77510 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace has status "Ready":"True"
	I1213 20:24:07.842088   77510 pod_ready.go:82] duration metric: took 829.753011ms for pod "kube-scheduler-default-k8s-diff-port-355668" in "kube-system" namespace to be "Ready" ...
	I1213 20:24:07.842099   77510 pod_ready.go:39] duration metric: took 9.346942648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 20:24:07.842114   77510 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:07.842174   77510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.858079   77510 api_server.go:72] duration metric: took 9.658239691s to wait for apiserver process to appear ...
	I1213 20:24:07.858107   77510 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:07.858133   77510 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8444/healthz ...
	I1213 20:24:07.864534   77510 api_server.go:279] https://192.168.39.233:8444/healthz returned 200:
	ok
	I1213 20:24:07.865713   77510 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:07.865744   77510 api_server.go:131] duration metric: took 7.628649ms to wait for apiserver health ...
	I1213 20:24:07.865758   77510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:07.872447   77510 system_pods.go:59] 9 kube-system pods found
	I1213 20:24:07.872473   77510 system_pods.go:61] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.872478   77510 system_pods.go:61] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.872482   77510 system_pods.go:61] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.872486   77510 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.872490   77510 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.872492   77510 system_pods.go:61] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.872496   77510 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.872502   77510 system_pods.go:61] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.872507   77510 system_pods.go:61] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.872518   77510 system_pods.go:74] duration metric: took 6.753419ms to wait for pod list to return data ...
	I1213 20:24:07.872532   77510 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:07.875714   77510 default_sa.go:45] found service account: "default"
	I1213 20:24:07.875737   77510 default_sa.go:55] duration metric: took 3.19796ms for default service account to be created ...
	I1213 20:24:07.875748   77510 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 20:24:07.881451   77510 system_pods.go:86] 9 kube-system pods found
	I1213 20:24:07.881474   77510 system_pods.go:89] "coredns-7c65d6cfc9-kl689" [37fe56ef-63a9-4777-87e0-495d71277e32] Running
	I1213 20:24:07.881480   77510 system_pods.go:89] "coredns-7c65d6cfc9-sk656" [f3071d78-0070-472d-a0e2-2ce271a37c20] Running
	I1213 20:24:07.881484   77510 system_pods.go:89] "etcd-default-k8s-diff-port-355668" [c8d8c66d-39e0-4b19-a3f2-63d5a66e05e9] Running
	I1213 20:24:07.881489   77510 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-355668" [77c99748-98ec-47a4-85d2-a2908f14c29b] Running
	I1213 20:24:07.881493   77510 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-355668" [44186a3f-4958-4b0c-82ae-48959fad9597] Running
	I1213 20:24:07.881496   77510 system_pods.go:89] "kube-proxy-vjsf7" [fcb2ebe1-bd40-48e1-8f88-a667f9f07d15] Running
	I1213 20:24:07.881500   77510 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-355668" [8184208a-8949-4050-abac-4fcc78237ecf] Running
	I1213 20:24:07.881507   77510 system_pods.go:89] "metrics-server-6867b74b74-8qvr9" [e67db0c2-4c1a-46a1-a61f-103019663d57] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:07.881512   77510 system_pods.go:89] "storage-provisioner" [c9bd91ad-91f6-44ec-a845-f9accf0261e1] Running
	I1213 20:24:07.881519   77510 system_pods.go:126] duration metric: took 5.765842ms to wait for k8s-apps to be running ...
	I1213 20:24:07.881529   77510 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 20:24:07.881576   77510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:24:07.896968   77510 system_svc.go:56] duration metric: took 15.429735ms WaitForService to wait for kubelet
	I1213 20:24:07.897000   77510 kubeadm.go:582] duration metric: took 9.69716545s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 20:24:07.897023   77510 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:08.181918   77510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:08.181946   77510 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:08.181959   77510 node_conditions.go:105] duration metric: took 284.930197ms to run NodePressure ...
	I1213 20:24:08.181973   77510 start.go:241] waiting for startup goroutines ...
	I1213 20:24:08.181983   77510 start.go:246] waiting for cluster config update ...
	I1213 20:24:08.181997   77510 start.go:255] writing updated cluster config ...
	I1213 20:24:08.257251   77510 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:08.310968   77510 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:08.560633   77510 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-355668" cluster and "default" namespace by default
	I1213 20:24:07.495945   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:07.509565   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:07.509640   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:07.548332   78367 cri.go:89] found id: ""
	I1213 20:24:07.548357   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.548365   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:07.548371   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:07.548417   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:07.585718   78367 cri.go:89] found id: ""
	I1213 20:24:07.585745   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.585752   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:07.585758   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:07.585816   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:07.620441   78367 cri.go:89] found id: ""
	I1213 20:24:07.620470   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.620478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:07.620485   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:07.620543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:07.654638   78367 cri.go:89] found id: ""
	I1213 20:24:07.654671   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.654682   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:07.654690   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:07.654752   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:07.690251   78367 cri.go:89] found id: ""
	I1213 20:24:07.690279   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.690289   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:07.690296   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:07.690362   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:07.733229   78367 cri.go:89] found id: ""
	I1213 20:24:07.733260   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.733268   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:07.733274   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:07.733325   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:07.767187   78367 cri.go:89] found id: ""
	I1213 20:24:07.767218   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.767229   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:07.767237   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:07.767309   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:07.803454   78367 cri.go:89] found id: ""
	I1213 20:24:07.803477   78367 logs.go:282] 0 containers: []
	W1213 20:24:07.803485   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:07.803493   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:07.803504   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:07.884578   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:07.884602   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:07.884616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:07.966402   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:07.966448   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.010335   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:08.010368   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:08.064614   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:08.064647   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:10.580540   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:10.597959   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:10.598030   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:10.667638   78367 cri.go:89] found id: ""
	I1213 20:24:10.667665   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.667675   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:10.667683   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:10.667739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:10.728894   78367 cri.go:89] found id: ""
	I1213 20:24:10.728918   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.728929   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:10.728936   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:10.728992   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:10.771954   78367 cri.go:89] found id: ""
	I1213 20:24:10.771991   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.772001   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:10.772009   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:10.772067   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:10.818154   78367 cri.go:89] found id: ""
	I1213 20:24:10.818181   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.818188   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:10.818193   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:10.818240   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:10.858974   78367 cri.go:89] found id: ""
	I1213 20:24:10.859003   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.859014   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:10.859021   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:10.859086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:10.908481   78367 cri.go:89] found id: ""
	I1213 20:24:10.908511   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.908524   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:10.908532   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:10.908604   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:10.944951   78367 cri.go:89] found id: ""
	I1213 20:24:10.944979   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.944987   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:10.945001   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:10.945064   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:10.979563   78367 cri.go:89] found id: ""
	I1213 20:24:10.979588   78367 logs.go:282] 0 containers: []
	W1213 20:24:10.979596   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:10.979604   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:10.979616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:11.052472   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:11.052507   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:11.068916   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:11.068947   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:11.146800   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:11.146826   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:11.146839   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:11.248307   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:11.248347   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:08.321808   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.374083   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:08.441322   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:08.441414   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:08.942600   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.441659   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:09.480026   79820 api_server.go:72] duration metric: took 1.038702713s to wait for apiserver process to appear ...
	I1213 20:24:09.480059   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:09.480084   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:09.480678   79820 api_server.go:269] stopped: https://192.168.50.11:8443/healthz: Get "https://192.168.50.11:8443/healthz": dial tcp 192.168.50.11:8443: connect: connection refused
	I1213 20:24:09.980257   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.178320   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.178365   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.178382   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.185253   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 20:24:12.185281   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 20:24:12.480680   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.491410   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.491444   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:12.981159   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:12.986141   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 20:24:12.986171   79820 api_server.go:103] status: https://192.168.50.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 20:24:13.480205   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:13.485225   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:13.494430   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:13.494452   79820 api_server.go:131] duration metric: took 4.014386318s to wait for apiserver health ...
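	The healthz exchange above is the usual apiserver warm-up sequence: a 403 while the probe is still anonymous, a 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200. A minimal sketch of that kind of polling loop, with an assumed URL and timeout and not minikube's actual api_server.go code, might look like:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls /healthz until it returns 200 or the timeout expires.
	// 403 and 500 responses are treated as "not ready yet", matching the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed certificate during bootstrap,
			// so a probe like this typically skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.50.11:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	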
	I1213 20:24:13.494460   79820 cni.go:84] Creating CNI manager for ""
	I1213 20:24:13.494465   79820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 20:24:13.496012   79820 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 20:24:13.497376   79820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 20:24:13.511144   79820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 20:24:13.533969   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:13.556295   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:13.556338   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:13.556349   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:13.556359   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:13.556368   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:13.556384   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 20:24:13.556397   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:13.556406   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:13.556420   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 20:24:13.556432   79820 system_pods.go:74] duration metric: took 22.427974ms to wait for pod list to return data ...
	I1213 20:24:13.556444   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:13.563220   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:13.563264   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:13.563277   79820 node_conditions.go:105] duration metric: took 6.825662ms to run NodePressure ...
	I1213 20:24:13.563301   79820 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 20:24:13.855672   79820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 20:24:13.870068   79820 ops.go:34] apiserver oom_adj: -16
	I1213 20:24:13.870105   79820 kubeadm.go:597] duration metric: took 7.351714184s to restartPrimaryControlPlane
	I1213 20:24:13.870119   79820 kubeadm.go:394] duration metric: took 7.411858052s to StartCluster
	I1213 20:24:13.870140   79820 settings.go:142] acquiring lock: {Name:mkc90da34b53323b31b6e69f8fab5ad7b1bdb254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.870220   79820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:24:13.871661   79820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/kubeconfig: {Name:mkeeacf16d2513309766df13b67a96dd252bc4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 20:24:13.871898   79820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 20:24:13.871961   79820 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 20:24:13.872063   79820 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-535459"
	I1213 20:24:13.872081   79820 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-535459"
	W1213 20:24:13.872093   79820 addons.go:243] addon storage-provisioner should already be in state true
	I1213 20:24:13.872124   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872109   79820 addons.go:69] Setting default-storageclass=true in profile "newest-cni-535459"
	I1213 20:24:13.872135   79820 addons.go:69] Setting metrics-server=true in profile "newest-cni-535459"
	I1213 20:24:13.872156   79820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-535459"
	I1213 20:24:13.872143   79820 addons.go:69] Setting dashboard=true in profile "newest-cni-535459"
	I1213 20:24:13.872165   79820 addons.go:234] Setting addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:13.872174   79820 config.go:182] Loaded profile config "newest-cni-535459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1213 20:24:13.872182   79820 addons.go:243] addon metrics-server should already be in state true
	I1213 20:24:13.872219   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872182   79820 addons.go:234] Setting addon dashboard=true in "newest-cni-535459"
	W1213 20:24:13.872286   79820 addons.go:243] addon dashboard should already be in state true
	I1213 20:24:13.872327   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.872589   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872598   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872618   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872634   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872647   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872667   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.872703   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.872640   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.874676   79820 out.go:177] * Verifying Kubernetes components...
	I1213 20:24:13.875998   79820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 20:24:13.893363   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I1213 20:24:13.893468   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I1213 20:24:13.893952   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894024   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I1213 20:24:13.893961   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894530   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.894709   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894722   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.894862   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.894876   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895087   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.895103   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.895161   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895204   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895380   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.895776   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.895816   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.896278   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1213 20:24:13.896384   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.896414   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.896800   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.897325   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.897345   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.897762   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.898269   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.898302   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.899617   79820 addons.go:234] Setting addon default-storageclass=true in "newest-cni-535459"
	W1213 20:24:13.899633   79820 addons.go:243] addon default-storageclass should already be in state true
	I1213 20:24:13.899663   79820 host.go:66] Checking if "newest-cni-535459" exists ...
	I1213 20:24:13.900022   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.900056   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.916023   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I1213 20:24:13.916600   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.916836   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I1213 20:24:13.917124   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917139   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917211   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.917661   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.917682   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.917755   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.917969   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.918150   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.918406   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.920502   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.921252   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.922950   79820 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 20:24:13.922980   79820 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 20:24:13.924173   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I1213 20:24:13.924523   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 20:24:13.924543   79820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 20:24:13.924561   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.924812   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.925357   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.925375   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.925880   79820 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1213 20:24:13.926431   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.926644   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.927129   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 20:24:13.927146   79820 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 20:24:13.927165   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.929247   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.930886   79820 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 20:24:13.794975   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:13.809490   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:13.809563   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:13.845247   78367 cri.go:89] found id: ""
	I1213 20:24:13.845312   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.845326   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:13.845337   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:13.845404   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:13.891111   78367 cri.go:89] found id: ""
	I1213 20:24:13.891155   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.891167   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:13.891174   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:13.891225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:13.944404   78367 cri.go:89] found id: ""
	I1213 20:24:13.944423   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.944431   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:13.944438   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:13.944479   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:13.982745   78367 cri.go:89] found id: ""
	I1213 20:24:13.982766   78367 logs.go:282] 0 containers: []
	W1213 20:24:13.982773   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:13.982779   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:13.982823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:14.018505   78367 cri.go:89] found id: ""
	I1213 20:24:14.018537   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.018547   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:14.018555   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:14.018622   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:14.053196   78367 cri.go:89] found id: ""
	I1213 20:24:14.053222   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.053233   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:14.053241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:14.053305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:14.085486   78367 cri.go:89] found id: ""
	I1213 20:24:14.085516   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.085526   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:14.085534   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:14.085600   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:14.123930   78367 cri.go:89] found id: ""
	I1213 20:24:14.123958   78367 logs.go:282] 0 containers: []
	W1213 20:24:14.123968   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:14.123979   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:14.123993   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:14.184665   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:14.184705   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:14.207707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:14.207742   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:14.317989   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:14.318017   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:14.318037   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:14.440228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:14.440275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:13.932098   79820 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:13.932112   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 20:24:13.932127   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.934949   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934951   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.934975   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.934995   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935008   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935027   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935077   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935093   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935143   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935167   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.935181   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935319   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935304   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.935471   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935503   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935535   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.935695   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.935709   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.935690   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.936047   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:13.940133   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1213 20:24:13.940516   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.940964   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.940980   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.941375   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.941957   79820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 20:24:13.941999   79820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 20:24:13.965055   79820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I1213 20:24:13.966122   79820 main.go:141] libmachine: () Calling .GetVersion
	I1213 20:24:13.966772   79820 main.go:141] libmachine: Using API Version  1
	I1213 20:24:13.966800   79820 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 20:24:13.967221   79820 main.go:141] libmachine: () Calling .GetMachineName
	I1213 20:24:13.967423   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetState
	I1213 20:24:13.969213   79820 main.go:141] libmachine: (newest-cni-535459) Calling .DriverName
	I1213 20:24:13.969387   79820 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:13.969404   79820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 20:24:13.969424   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHHostname
	I1213 20:24:13.971994   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972410   79820 main.go:141] libmachine: (newest-cni-535459) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:17:89", ip: ""} in network mk-newest-cni-535459: {Iface:virbr2 ExpiryTime:2024-12-13 21:23:49 +0000 UTC Type:0 Mac:52:54:00:7d:17:89 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:newest-cni-535459 Clientid:01:52:54:00:7d:17:89}
	I1213 20:24:13.972431   79820 main.go:141] libmachine: (newest-cni-535459) DBG | domain newest-cni-535459 has defined IP address 192.168.50.11 and MAC address 52:54:00:7d:17:89 in network mk-newest-cni-535459
	I1213 20:24:13.972569   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHPort
	I1213 20:24:13.972706   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHKeyPath
	I1213 20:24:13.972834   79820 main.go:141] libmachine: (newest-cni-535459) Calling .GetSSHUsername
	I1213 20:24:13.972937   79820 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/newest-cni-535459/id_rsa Username:docker}
	I1213 20:24:14.127383   79820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 20:24:14.156652   79820 api_server.go:52] waiting for apiserver process to appear ...
	I1213 20:24:14.156824   79820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:14.175603   79820 api_server.go:72] duration metric: took 303.674582ms to wait for apiserver process to appear ...
	I1213 20:24:14.175692   79820 api_server.go:88] waiting for apiserver healthz status ...
	I1213 20:24:14.175713   79820 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I1213 20:24:14.180066   79820 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I1213 20:24:14.181204   79820 api_server.go:141] control plane version: v1.31.2
	I1213 20:24:14.181224   79820 api_server.go:131] duration metric: took 5.524316ms to wait for apiserver health ...
	I1213 20:24:14.181240   79820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 20:24:14.186870   79820 system_pods.go:59] 8 kube-system pods found
	I1213 20:24:14.186902   79820 system_pods.go:61] "coredns-7c65d6cfc9-q6mqc" [9f65c257-99b6-466f-91ae-9676625eb834] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 20:24:14.186913   79820 system_pods.go:61] "etcd-newest-cni-535459" [b491d2e0-2d34-4f0b-abf3-1d212ba9f422] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 20:24:14.186926   79820 system_pods.go:61] "kube-apiserver-newest-cni-535459" [6aeeeaed-b2ec-4c7d-ac94-215b57c0bd45] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 20:24:14.186935   79820 system_pods.go:61] "kube-controller-manager-newest-cni-535459" [51cd3848-17b3-493a-87db-d16192d55533] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 20:24:14.186942   79820 system_pods.go:61] "kube-proxy-msh9m" [e538f898-3a04-4e6f-bbf2-fc7fb13b43f4] Running
	I1213 20:24:14.186950   79820 system_pods.go:61] "kube-scheduler-newest-cni-535459" [90d47a04-6a40-461b-a19e-cc3d8a7b92ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 20:24:14.186958   79820 system_pods.go:61] "metrics-server-6867b74b74-29j2k" [cb907d37-be2a-4579-ba77-9c5add245ec1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 20:24:14.186963   79820 system_pods.go:61] "storage-provisioner" [de0598b8-996f-4307-b6c8-e81fa10d6f47] Running
	I1213 20:24:14.186970   79820 system_pods.go:74] duration metric: took 5.722864ms to wait for pod list to return data ...
	I1213 20:24:14.186978   79820 default_sa.go:34] waiting for default service account to be created ...
	I1213 20:24:14.191022   79820 default_sa.go:45] found service account: "default"
	I1213 20:24:14.191047   79820 default_sa.go:55] duration metric: took 4.057067ms for default service account to be created ...
	I1213 20:24:14.191062   79820 kubeadm.go:582] duration metric: took 319.136167ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 20:24:14.191078   79820 node_conditions.go:102] verifying NodePressure condition ...
	I1213 20:24:14.203724   79820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 20:24:14.203754   79820 node_conditions.go:123] node cpu capacity is 2
	I1213 20:24:14.203765   79820 node_conditions.go:105] duration metric: took 12.682303ms to run NodePressure ...
	I1213 20:24:14.203779   79820 start.go:241] waiting for startup goroutines ...
	I1213 20:24:14.265979   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 20:24:14.322830   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 20:24:14.322892   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 20:24:14.353048   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 20:24:14.355217   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 20:24:14.355245   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 20:24:14.409641   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 20:24:14.409670   79820 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 20:24:14.425869   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 20:24:14.425901   79820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 20:24:14.489915   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 20:24:14.490017   79820 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 20:24:14.521997   79820 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.522024   79820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 20:24:14.564655   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 20:24:14.564686   79820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 20:24:14.614041   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 20:24:14.641054   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 20:24:14.641084   79820 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 20:24:14.710567   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 20:24:14.710601   79820 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 20:24:14.745018   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 20:24:14.745055   79820 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 20:24:14.779553   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 20:24:14.779583   79820 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 20:24:14.893256   79820 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:14.893286   79820 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 20:24:14.933845   79820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 20:24:16.576729   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.310647345s)
	I1213 20:24:16.576794   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576808   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576827   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.223742976s)
	I1213 20:24:16.576868   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.576885   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.576966   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.962891887s)
	I1213 20:24:16.576995   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.577005   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578358   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578370   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578382   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578394   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578394   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578402   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578413   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578421   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578424   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578430   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578432   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578442   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578457   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578404   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.578486   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.578697   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578728   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578743   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578825   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578853   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.578862   79820 addons.go:475] Verifying addon metrics-server=true in "newest-cni-535459"
	I1213 20:24:16.578921   79820 main.go:141] libmachine: (newest-cni-535459) DBG | Closing plugin on server side
	I1213 20:24:16.578931   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.578944   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.624470   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.624501   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.624775   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.624793   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847028   79820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.913138549s)
	I1213 20:24:16.847092   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847111   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847446   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847467   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.847482   79820 main.go:141] libmachine: Making call to close driver server
	I1213 20:24:16.847491   79820 main.go:141] libmachine: (newest-cni-535459) Calling .Close
	I1213 20:24:16.847737   79820 main.go:141] libmachine: Successfully made call to close driver server
	I1213 20:24:16.847764   79820 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 20:24:16.849290   79820 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-535459 addons enable metrics-server
	
	I1213 20:24:16.850380   79820 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1213 20:24:16.851370   79820 addons.go:510] duration metric: took 2.979414999s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1213 20:24:16.851411   79820 start.go:246] waiting for cluster config update ...
	I1213 20:24:16.851425   79820 start.go:255] writing updated cluster config ...
	I1213 20:24:16.851676   79820 ssh_runner.go:195] Run: rm -f paused
	I1213 20:24:16.919885   79820 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 20:24:16.921326   79820 out.go:177] * Done! kubectl is now configured to use "newest-cni-535459" cluster and "default" namespace by default
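	The addon-enable sequence above copies each manifest to /etc/kubernetes/addons and applies it with the node-local kubectl binary against the node's kubeconfig. A minimal sketch of that apply step, using illustrative paths and a local sudo invocation instead of minikube's SSH session, might look like:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// applyAddons runs the node-local kubectl with the node's kubeconfig for a
	// set of addon manifests, roughly what the "kubectl apply -f ..." commands
	// in the log do.
	func applyAddons(kubectlVersion string, manifests []string) error {
		args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/" + kubectlVersion + "/kubectl", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}
	
	func main() {
		if err := applyAddons("v1.31.2", []string{"/etc/kubernetes/addons/storage-provisioner.yaml"}); err != nil {
			fmt.Println(err)
		}
	}
	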
	I1213 20:24:16.992002   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:17.010798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:17.010887   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:17.054515   78367 cri.go:89] found id: ""
	I1213 20:24:17.054539   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.054548   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:17.054557   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:17.054608   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:17.106222   78367 cri.go:89] found id: ""
	I1213 20:24:17.106258   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.106269   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:17.106276   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:17.106328   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:17.145680   78367 cri.go:89] found id: ""
	I1213 20:24:17.145706   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.145713   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:17.145719   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:17.145772   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:17.183345   78367 cri.go:89] found id: ""
	I1213 20:24:17.183372   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.183383   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:17.183391   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:17.183440   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:17.218181   78367 cri.go:89] found id: ""
	I1213 20:24:17.218214   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.218226   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:17.218233   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:17.218308   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:17.260697   78367 cri.go:89] found id: ""
	I1213 20:24:17.260736   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.260747   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:17.260756   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:17.260815   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:17.296356   78367 cri.go:89] found id: ""
	I1213 20:24:17.296383   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.296394   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:17.296402   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:17.296452   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:17.332909   78367 cri.go:89] found id: ""
	I1213 20:24:17.332936   78367 logs.go:282] 0 containers: []
	W1213 20:24:17.332946   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:17.332956   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:17.332979   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:17.400328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:17.400361   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:17.419802   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:17.419836   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:17.508687   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:17.508709   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:17.508724   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:17.594401   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:17.594433   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:20.132881   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:20.151309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:20.151382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:20.185818   78367 cri.go:89] found id: ""
	I1213 20:24:20.185845   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.185854   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:20.185862   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:20.185913   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:20.227855   78367 cri.go:89] found id: ""
	I1213 20:24:20.227885   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.227895   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:20.227902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:20.227957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:20.265126   78367 cri.go:89] found id: ""
	I1213 20:24:20.265149   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.265158   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:20.265165   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:20.265215   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:20.303082   78367 cri.go:89] found id: ""
	I1213 20:24:20.303100   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.303106   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:20.303112   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:20.303148   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:20.334523   78367 cri.go:89] found id: ""
	I1213 20:24:20.334554   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.334565   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:20.334573   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:20.334634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:20.367872   78367 cri.go:89] found id: ""
	I1213 20:24:20.367904   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.367915   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:20.367922   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:20.367972   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:20.401025   78367 cri.go:89] found id: ""
	I1213 20:24:20.401053   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.401063   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:20.401071   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:20.401118   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:20.437198   78367 cri.go:89] found id: ""
	I1213 20:24:20.437224   78367 logs.go:282] 0 containers: []
	W1213 20:24:20.437232   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:20.437240   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:20.437252   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:20.491638   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:20.491670   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:20.507146   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:20.507176   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:20.586662   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:20.586708   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:20.586725   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:20.677650   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:20.677702   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.226457   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:23.240139   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:23.240197   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:23.276469   78367 cri.go:89] found id: ""
	I1213 20:24:23.276503   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.276514   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:23.276522   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:23.276576   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:23.321764   78367 cri.go:89] found id: ""
	I1213 20:24:23.321793   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.321804   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:23.321811   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:23.321860   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:23.355263   78367 cri.go:89] found id: ""
	I1213 20:24:23.355297   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.355308   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:23.355315   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:23.355368   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:23.396846   78367 cri.go:89] found id: ""
	I1213 20:24:23.396875   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.396885   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:23.396894   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:23.396955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:23.435540   78367 cri.go:89] found id: ""
	I1213 20:24:23.435567   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.435578   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:23.435586   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:23.435634   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:23.473920   78367 cri.go:89] found id: ""
	I1213 20:24:23.473944   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.473959   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:23.473967   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:23.474023   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:23.507136   78367 cri.go:89] found id: ""
	I1213 20:24:23.507168   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.507177   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:23.507183   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:23.507239   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:23.539050   78367 cri.go:89] found id: ""
	I1213 20:24:23.539075   78367 logs.go:282] 0 containers: []
	W1213 20:24:23.539083   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:23.539091   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:23.539104   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:23.553000   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:23.553026   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:23.619106   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:23.619128   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:23.619143   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:23.704028   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:23.704065   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:23.740575   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:23.740599   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.290469   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:26.303070   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:26.303114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:26.333881   78367 cri.go:89] found id: ""
	I1213 20:24:26.333902   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.333909   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:26.333915   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:26.333957   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:26.367218   78367 cri.go:89] found id: ""
	I1213 20:24:26.367246   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.367253   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:26.367258   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:26.367314   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:26.397281   78367 cri.go:89] found id: ""
	I1213 20:24:26.397313   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.397325   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:26.397332   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:26.397388   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:26.429238   78367 cri.go:89] found id: ""
	I1213 20:24:26.429260   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.429270   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:26.429290   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:26.429335   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:26.457723   78367 cri.go:89] found id: ""
	I1213 20:24:26.457751   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.457760   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:26.457765   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:26.457820   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:26.487066   78367 cri.go:89] found id: ""
	I1213 20:24:26.487086   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.487093   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:26.487098   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:26.487153   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:26.517336   78367 cri.go:89] found id: ""
	I1213 20:24:26.517360   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.517367   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:26.517373   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:26.517428   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:26.547918   78367 cri.go:89] found id: ""
	I1213 20:24:26.547940   78367 logs.go:282] 0 containers: []
	W1213 20:24:26.547947   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:26.547955   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:26.547966   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:26.614500   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:26.614527   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:26.614541   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:26.688954   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:26.688983   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:26.723430   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:26.723453   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:26.771679   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:26.771707   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.284113   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:29.296309   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:29.296365   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:29.335369   78367 cri.go:89] found id: ""
	I1213 20:24:29.335395   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.335404   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:29.335411   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:29.335477   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:29.364958   78367 cri.go:89] found id: ""
	I1213 20:24:29.364996   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.365005   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:29.365011   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:29.365056   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:29.395763   78367 cri.go:89] found id: ""
	I1213 20:24:29.395785   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.395792   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:29.395798   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:29.395847   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:29.426100   78367 cri.go:89] found id: ""
	I1213 20:24:29.426131   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.426141   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:29.426148   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:29.426212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:29.454982   78367 cri.go:89] found id: ""
	I1213 20:24:29.455011   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.455018   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:29.455025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:29.455086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:29.490059   78367 cri.go:89] found id: ""
	I1213 20:24:29.490088   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.490098   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:29.490105   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:29.490164   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:29.523139   78367 cri.go:89] found id: ""
	I1213 20:24:29.523170   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.523179   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:29.523184   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:29.523235   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:29.553382   78367 cri.go:89] found id: ""
	I1213 20:24:29.553411   78367 logs.go:282] 0 containers: []
	W1213 20:24:29.553422   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:29.553432   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:29.553445   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:29.603370   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:29.603399   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:29.615270   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:29.615296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:29.676210   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:29.676241   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:29.676256   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:29.748591   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:29.748620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:32.283657   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:32.295699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:32.295770   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:32.326072   78367 cri.go:89] found id: ""
	I1213 20:24:32.326100   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.326109   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:32.326116   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:32.326174   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:32.359219   78367 cri.go:89] found id: ""
	I1213 20:24:32.359267   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.359279   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:32.359287   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:32.359374   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:32.389664   78367 cri.go:89] found id: ""
	I1213 20:24:32.389687   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.389694   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:32.389700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:32.389756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:32.419871   78367 cri.go:89] found id: ""
	I1213 20:24:32.419893   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.419899   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:32.419904   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:32.419955   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:32.449254   78367 cri.go:89] found id: ""
	I1213 20:24:32.449282   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.449292   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:32.449300   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:32.449359   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:32.477857   78367 cri.go:89] found id: ""
	I1213 20:24:32.477887   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.477897   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:32.477905   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:32.477965   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:32.507395   78367 cri.go:89] found id: ""
	I1213 20:24:32.507420   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.507429   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:32.507437   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:32.507493   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:32.536846   78367 cri.go:89] found id: ""
	I1213 20:24:32.536882   78367 logs.go:282] 0 containers: []
	W1213 20:24:32.536894   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:32.536904   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:32.536918   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:32.586510   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:32.586540   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:32.598914   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:32.598941   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:32.661653   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:32.661673   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:32.661686   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:32.738149   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:32.738180   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:35.274525   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:35.287259   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:35.287338   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:35.321233   78367 cri.go:89] found id: ""
	I1213 20:24:35.321269   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.321280   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:35.321287   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:35.321350   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:35.351512   78367 cri.go:89] found id: ""
	I1213 20:24:35.351535   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.351543   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:35.351549   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:35.351607   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:35.380770   78367 cri.go:89] found id: ""
	I1213 20:24:35.380795   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.380805   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:35.380812   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:35.380868   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:35.410311   78367 cri.go:89] found id: ""
	I1213 20:24:35.410339   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.410348   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:35.410356   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:35.410410   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:35.437955   78367 cri.go:89] found id: ""
	I1213 20:24:35.437979   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.437987   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:35.437992   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:35.438039   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:35.467621   78367 cri.go:89] found id: ""
	I1213 20:24:35.467646   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.467657   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:35.467665   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:35.467729   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:35.496779   78367 cri.go:89] found id: ""
	I1213 20:24:35.496801   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.496809   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:35.496814   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:35.496867   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:35.527107   78367 cri.go:89] found id: ""
	I1213 20:24:35.527140   78367 logs.go:282] 0 containers: []
	W1213 20:24:35.527148   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:35.527157   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:35.527167   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:35.573444   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:35.573472   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:35.586107   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:35.586129   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:35.647226   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:35.647249   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:35.647261   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:35.721264   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:35.721297   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.256983   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:38.269600   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:38.269665   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:38.304526   78367 cri.go:89] found id: ""
	I1213 20:24:38.304552   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.304559   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:38.304566   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:38.304621   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:38.334858   78367 cri.go:89] found id: ""
	I1213 20:24:38.334885   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.334896   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:38.334902   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:38.334959   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:38.364281   78367 cri.go:89] found id: ""
	I1213 20:24:38.364305   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.364312   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:38.364318   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:38.364364   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:38.393853   78367 cri.go:89] found id: ""
	I1213 20:24:38.393878   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.393886   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:38.393892   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:38.393936   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:38.424196   78367 cri.go:89] found id: ""
	I1213 20:24:38.424225   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.424234   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:38.424241   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:38.424305   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:38.454285   78367 cri.go:89] found id: ""
	I1213 20:24:38.454311   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.454322   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:38.454330   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:38.454382   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:38.483158   78367 cri.go:89] found id: ""
	I1213 20:24:38.483187   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.483194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:38.483199   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:38.483250   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:38.512116   78367 cri.go:89] found id: ""
	I1213 20:24:38.512149   78367 logs.go:282] 0 containers: []
	W1213 20:24:38.512161   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:38.512172   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:38.512186   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:38.587026   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:38.587053   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:38.587069   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:38.661024   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:38.661055   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:38.695893   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:38.695922   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:38.746253   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:38.746282   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.258578   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:41.271632   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:41.271691   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:41.303047   78367 cri.go:89] found id: ""
	I1213 20:24:41.303073   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.303081   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:41.303087   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:41.303149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:41.334605   78367 cri.go:89] found id: ""
	I1213 20:24:41.334642   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.334653   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:41.334662   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:41.334714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:41.367617   78367 cri.go:89] found id: ""
	I1213 20:24:41.367650   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.367661   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:41.367670   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:41.367724   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:41.399772   78367 cri.go:89] found id: ""
	I1213 20:24:41.399800   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.399811   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:41.399819   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:41.399880   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:41.431833   78367 cri.go:89] found id: ""
	I1213 20:24:41.431869   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.431879   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:41.431887   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:41.431948   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:41.462640   78367 cri.go:89] found id: ""
	I1213 20:24:41.462669   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.462679   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:41.462688   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:41.462757   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:41.492716   78367 cri.go:89] found id: ""
	I1213 20:24:41.492748   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.492758   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:41.492764   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:41.492823   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:41.527697   78367 cri.go:89] found id: ""
	I1213 20:24:41.527729   78367 logs.go:282] 0 containers: []
	W1213 20:24:41.527739   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:41.527750   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:41.527763   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:41.540507   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:41.540530   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:41.602837   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:41.602873   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:41.602888   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:41.676818   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:41.676855   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:41.713699   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:41.713731   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.263397   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:44.275396   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:44.275463   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:44.306065   78367 cri.go:89] found id: ""
	I1213 20:24:44.306095   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.306106   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:44.306114   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:44.306170   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:44.336701   78367 cri.go:89] found id: ""
	I1213 20:24:44.336734   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.336746   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:44.336754   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:44.336803   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:44.367523   78367 cri.go:89] found id: ""
	I1213 20:24:44.367553   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.367564   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:44.367571   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:44.367626   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:44.397934   78367 cri.go:89] found id: ""
	I1213 20:24:44.397960   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.397970   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:44.397978   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:44.398043   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:44.428770   78367 cri.go:89] found id: ""
	I1213 20:24:44.428799   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.428810   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:44.428817   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:44.428874   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:44.459961   78367 cri.go:89] found id: ""
	I1213 20:24:44.459999   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.460011   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:44.460018   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:44.460068   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:44.491377   78367 cri.go:89] found id: ""
	I1213 20:24:44.491407   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.491419   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:44.491426   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:44.491488   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:44.521764   78367 cri.go:89] found id: ""
	I1213 20:24:44.521798   78367 logs.go:282] 0 containers: []
	W1213 20:24:44.521808   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:44.521819   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:44.521835   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:44.584292   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:44.584316   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:44.584328   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:44.654841   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:44.654880   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:44.689572   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:44.689598   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:44.738234   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:44.738265   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:47.250759   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:47.262717   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:47.262786   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:47.291884   78367 cri.go:89] found id: ""
	I1213 20:24:47.291910   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.291917   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:47.291923   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:47.291968   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:47.322010   78367 cri.go:89] found id: ""
	I1213 20:24:47.322036   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.322047   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:47.322056   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:47.322114   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:47.352441   78367 cri.go:89] found id: ""
	I1213 20:24:47.352470   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.352478   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:47.352483   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:47.352535   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:47.382622   78367 cri.go:89] found id: ""
	I1213 20:24:47.382646   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.382653   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:47.382659   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:47.382709   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:47.413127   78367 cri.go:89] found id: ""
	I1213 20:24:47.413149   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.413156   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:47.413161   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:47.413212   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:47.445397   78367 cri.go:89] found id: ""
	I1213 20:24:47.445423   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.445430   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:47.445435   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:47.445483   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:47.475871   78367 cri.go:89] found id: ""
	I1213 20:24:47.475897   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.475904   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:47.475910   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:47.475966   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:47.505357   78367 cri.go:89] found id: ""
	I1213 20:24:47.505382   78367 logs.go:282] 0 containers: []
	W1213 20:24:47.505389   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:47.505397   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:47.505407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:47.568960   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:47.568982   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:47.569010   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:47.646228   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:47.646262   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:47.679590   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:47.679616   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:47.726854   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:47.726884   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.239188   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:50.251010   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:50.251061   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:50.281168   78367 cri.go:89] found id: ""
	I1213 20:24:50.281194   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.281204   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:50.281211   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:50.281277   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:50.310396   78367 cri.go:89] found id: ""
	I1213 20:24:50.310421   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.310431   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:50.310438   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:50.310491   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:50.340824   78367 cri.go:89] found id: ""
	I1213 20:24:50.340856   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.340866   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:50.340873   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:50.340937   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:50.377401   78367 cri.go:89] found id: ""
	I1213 20:24:50.377430   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.377437   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:50.377443   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:50.377500   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:50.406521   78367 cri.go:89] found id: ""
	I1213 20:24:50.406552   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.406562   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:50.406567   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:50.406632   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:50.440070   78367 cri.go:89] found id: ""
	I1213 20:24:50.440101   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.440112   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:50.440118   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:50.440168   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:50.473103   78367 cri.go:89] found id: ""
	I1213 20:24:50.473134   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.473145   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:50.473152   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:50.473218   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:50.503787   78367 cri.go:89] found id: ""
	I1213 20:24:50.503815   78367 logs.go:282] 0 containers: []
	W1213 20:24:50.503824   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:50.503832   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:50.503842   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:50.551379   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:50.551407   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:50.563705   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:50.563732   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:50.625016   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:50.625046   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:50.625062   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:50.717566   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:50.717601   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.254296   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:53.266940   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:53.266995   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:53.302975   78367 cri.go:89] found id: ""
	I1213 20:24:53.303000   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.303008   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:53.303013   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:53.303080   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:53.338434   78367 cri.go:89] found id: ""
	I1213 20:24:53.338461   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.338469   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:53.338474   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:53.338526   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:53.375117   78367 cri.go:89] found id: ""
	I1213 20:24:53.375146   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.375156   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:53.375164   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:53.375221   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:53.413376   78367 cri.go:89] found id: ""
	I1213 20:24:53.413406   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.413416   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:53.413423   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:53.413482   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:53.447697   78367 cri.go:89] found id: ""
	I1213 20:24:53.447725   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.447736   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:53.447743   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:53.447802   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:53.480987   78367 cri.go:89] found id: ""
	I1213 20:24:53.481019   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.481037   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:53.481045   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:53.481149   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:53.516573   78367 cri.go:89] found id: ""
	I1213 20:24:53.516602   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.516611   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:53.516617   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:53.516664   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:53.552098   78367 cri.go:89] found id: ""
	I1213 20:24:53.552128   78367 logs.go:282] 0 containers: []
	W1213 20:24:53.552144   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:53.552155   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:53.552168   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:53.632362   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:53.632393   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:53.667030   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:53.667061   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:53.716328   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:53.716355   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:53.730194   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:53.730219   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:53.804612   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.305032   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:56.317875   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:56.317934   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:56.353004   78367 cri.go:89] found id: ""
	I1213 20:24:56.353027   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.353035   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:56.353040   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:56.353086   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:56.398694   78367 cri.go:89] found id: ""
	I1213 20:24:56.398722   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.398731   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:56.398739   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:56.398800   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:56.430481   78367 cri.go:89] found id: ""
	I1213 20:24:56.430512   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.430523   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:56.430530   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:56.430589   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:56.460467   78367 cri.go:89] found id: ""
	I1213 20:24:56.460501   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.460512   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:56.460520   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:56.460583   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:56.490776   78367 cri.go:89] found id: ""
	I1213 20:24:56.490804   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.490814   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:56.490822   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:56.490889   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:56.520440   78367 cri.go:89] found id: ""
	I1213 20:24:56.520466   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.520473   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:56.520478   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:56.520525   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:56.550233   78367 cri.go:89] found id: ""
	I1213 20:24:56.550258   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.550266   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:56.550271   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:56.550347   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:56.580651   78367 cri.go:89] found id: ""
	I1213 20:24:56.580681   78367 logs.go:282] 0 containers: []
	W1213 20:24:56.580692   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:56.580703   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:56.580716   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:56.650811   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:56.650839   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:56.650892   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:56.728061   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:56.728089   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:56.767782   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:56.767809   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:56.818747   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:56.818781   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:24:59.331474   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:24:59.344319   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:24:59.344379   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:24:59.373901   78367 cri.go:89] found id: ""
	I1213 20:24:59.373931   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.373941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:24:59.373947   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:24:59.373999   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:24:59.405800   78367 cri.go:89] found id: ""
	I1213 20:24:59.405832   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.405844   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:24:59.405851   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:24:59.405922   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:24:59.435487   78367 cri.go:89] found id: ""
	I1213 20:24:59.435517   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.435527   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:24:59.435535   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:24:59.435587   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:24:59.466466   78367 cri.go:89] found id: ""
	I1213 20:24:59.466489   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.466497   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:24:59.466502   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:24:59.466543   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:24:59.500301   78367 cri.go:89] found id: ""
	I1213 20:24:59.500330   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.500337   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:24:59.500342   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:24:59.500387   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:24:59.532614   78367 cri.go:89] found id: ""
	I1213 20:24:59.532642   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.532651   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:24:59.532658   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:24:59.532717   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:24:59.562990   78367 cri.go:89] found id: ""
	I1213 20:24:59.563013   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.563020   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:24:59.563034   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:24:59.563078   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:24:59.593335   78367 cri.go:89] found id: ""
	I1213 20:24:59.593366   78367 logs.go:282] 0 containers: []
	W1213 20:24:59.593376   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:24:59.593386   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:24:59.593401   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:24:59.659058   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:24:59.659083   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:24:59.659097   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:24:59.733569   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:24:59.733600   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:24:59.770151   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:24:59.770178   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:24:59.820506   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:24:59.820534   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.334083   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:02.346559   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:02.346714   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:02.380346   78367 cri.go:89] found id: ""
	I1213 20:25:02.380376   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.380384   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:02.380390   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:02.380441   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:02.412347   78367 cri.go:89] found id: ""
	I1213 20:25:02.412374   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.412385   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:02.412392   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:02.412453   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:02.443408   78367 cri.go:89] found id: ""
	I1213 20:25:02.443441   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.443453   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:02.443461   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:02.443514   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:02.474165   78367 cri.go:89] found id: ""
	I1213 20:25:02.474193   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.474201   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:02.474206   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:02.474272   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:02.505076   78367 cri.go:89] found id: ""
	I1213 20:25:02.505109   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.505121   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:02.505129   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:02.505186   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:02.541145   78367 cri.go:89] found id: ""
	I1213 20:25:02.541174   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.541182   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:02.541187   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:02.541236   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:02.579150   78367 cri.go:89] found id: ""
	I1213 20:25:02.579183   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.579194   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:02.579201   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:02.579262   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:02.611542   78367 cri.go:89] found id: ""
	I1213 20:25:02.611582   78367 logs.go:282] 0 containers: []
	W1213 20:25:02.611594   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:02.611607   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:02.611620   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:02.661145   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:02.661183   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:02.673918   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:02.673944   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:02.745321   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:02.745345   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:02.745358   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:02.820953   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:02.820992   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.373838   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:05.386758   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:05.386833   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:05.419177   78367 cri.go:89] found id: ""
	I1213 20:25:05.419205   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.419215   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:05.419223   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:05.419292   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:05.450595   78367 cri.go:89] found id: ""
	I1213 20:25:05.450628   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.450639   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:05.450648   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:05.450707   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:05.481818   78367 cri.go:89] found id: ""
	I1213 20:25:05.481844   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.481852   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:05.481857   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:05.481902   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:05.517195   78367 cri.go:89] found id: ""
	I1213 20:25:05.517230   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.517239   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:05.517246   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:05.517302   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:05.548698   78367 cri.go:89] found id: ""
	I1213 20:25:05.548733   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.548744   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:05.548753   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:05.548811   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:05.579983   78367 cri.go:89] found id: ""
	I1213 20:25:05.580009   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.580015   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:05.580022   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:05.580070   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:05.610660   78367 cri.go:89] found id: ""
	I1213 20:25:05.610685   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.610693   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:05.610699   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:05.610750   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:05.641572   78367 cri.go:89] found id: ""
	I1213 20:25:05.641598   78367 logs.go:282] 0 containers: []
	W1213 20:25:05.641605   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:05.641614   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:05.641625   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:05.712243   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:05.712264   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:05.712275   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:05.793232   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:05.793271   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:05.827863   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:05.827901   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:05.877641   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:05.877671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.390425   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:08.402888   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:08.402944   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:08.436903   78367 cri.go:89] found id: ""
	I1213 20:25:08.436931   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.436941   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:08.436948   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:08.437005   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:08.469526   78367 cri.go:89] found id: ""
	I1213 20:25:08.469561   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.469574   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:08.469581   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:08.469644   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:08.500136   78367 cri.go:89] found id: ""
	I1213 20:25:08.500165   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.500172   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:08.500178   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:08.500223   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:08.537556   78367 cri.go:89] found id: ""
	I1213 20:25:08.537591   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.537603   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:08.537611   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:08.537669   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:08.577468   78367 cri.go:89] found id: ""
	I1213 20:25:08.577492   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.577501   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:08.577509   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:08.577566   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:08.632075   78367 cri.go:89] found id: ""
	I1213 20:25:08.632103   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.632113   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:08.632120   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:08.632178   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:08.671119   78367 cri.go:89] found id: ""
	I1213 20:25:08.671148   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.671158   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:08.671166   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:08.671225   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:08.700873   78367 cri.go:89] found id: ""
	I1213 20:25:08.700900   78367 logs.go:282] 0 containers: []
	W1213 20:25:08.700908   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:08.700916   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:08.700927   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:08.713084   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:08.713107   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:08.780299   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:08.780331   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:08.780346   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:08.851830   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:08.851865   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:08.886834   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:08.886883   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.435256   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:11.447096   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:11.447155   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:11.477376   78367 cri.go:89] found id: ""
	I1213 20:25:11.477403   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.477411   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:11.477416   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:11.477460   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:11.507532   78367 cri.go:89] found id: ""
	I1213 20:25:11.507564   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.507572   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:11.507582   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:11.507628   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:11.537352   78367 cri.go:89] found id: ""
	I1213 20:25:11.537383   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.537393   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:11.537400   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:11.537450   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:11.567653   78367 cri.go:89] found id: ""
	I1213 20:25:11.567681   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.567693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:11.567700   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:11.567756   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:11.597752   78367 cri.go:89] found id: ""
	I1213 20:25:11.597782   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.597790   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:11.597795   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:11.597840   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:11.626231   78367 cri.go:89] found id: ""
	I1213 20:25:11.626258   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.626269   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:11.626276   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:11.626334   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:11.655694   78367 cri.go:89] found id: ""
	I1213 20:25:11.655724   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.655733   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:11.655740   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:11.655794   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:11.685714   78367 cri.go:89] found id: ""
	I1213 20:25:11.685742   78367 logs.go:282] 0 containers: []
	W1213 20:25:11.685750   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:11.685758   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:11.685768   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:11.733749   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:11.733774   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:11.746307   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:11.746330   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:11.807168   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:11.807190   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:11.807202   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:11.878490   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:11.878522   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.416516   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:14.428258   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:25:14.428339   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:25:14.458229   78367 cri.go:89] found id: ""
	I1213 20:25:14.458255   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.458263   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:25:14.458272   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:25:14.458326   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:25:14.488061   78367 cri.go:89] found id: ""
	I1213 20:25:14.488101   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.488109   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:25:14.488114   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:25:14.488159   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:25:14.516854   78367 cri.go:89] found id: ""
	I1213 20:25:14.516880   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.516888   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:25:14.516893   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:25:14.516953   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:25:14.549881   78367 cri.go:89] found id: ""
	I1213 20:25:14.549908   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.549919   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:25:14.549925   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:25:14.549982   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:25:14.579410   78367 cri.go:89] found id: ""
	I1213 20:25:14.579439   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.579449   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:25:14.579457   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:25:14.579507   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:25:14.609126   78367 cri.go:89] found id: ""
	I1213 20:25:14.609155   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.609163   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:25:14.609169   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:25:14.609216   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:25:14.638655   78367 cri.go:89] found id: ""
	I1213 20:25:14.638682   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.638689   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:25:14.638694   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:25:14.638739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:25:14.667950   78367 cri.go:89] found id: ""
	I1213 20:25:14.667977   78367 logs.go:282] 0 containers: []
	W1213 20:25:14.667986   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:25:14.667997   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:25:14.668011   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:25:14.705223   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:25:14.705250   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:25:14.753645   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:25:14.753671   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:25:14.766082   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:25:14.766106   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:25:14.826802   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:25:14.826829   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:25:14.826841   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 20:25:17.400518   78367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 20:25:17.412464   78367 kubeadm.go:597] duration metric: took 4m2.435244002s to restartPrimaryControlPlane
	W1213 20:25:17.412536   78367 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 20:25:17.412564   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:25:19.422149   78367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.009561199s)
	I1213 20:25:19.422215   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:25:19.435431   78367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 20:25:19.444465   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:25:19.452996   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:25:19.453011   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:25:19.453051   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:25:19.461055   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:25:19.461096   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:25:19.469525   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:25:19.477399   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:25:19.477442   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:25:19.485719   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.493837   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:25:19.493895   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:25:19.502493   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:25:19.510479   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:25:19.510525   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:25:19.518746   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:25:19.585664   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:25:19.585781   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:25:19.709117   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:25:19.709242   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:25:19.709362   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:25:19.865449   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:25:19.867503   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:25:19.867605   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:25:19.867668   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:25:19.867759   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:25:19.867864   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:25:19.867978   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:25:19.868062   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:25:19.868159   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:25:19.868251   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:25:19.868515   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:25:19.868889   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:25:19.869062   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:25:19.869157   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:25:19.955108   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:25:20.380950   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:25:20.496704   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:25:20.598530   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:25:20.612045   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:25:20.613742   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:25:20.613809   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:25:20.733629   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:25:20.735476   78367 out.go:235]   - Booting up control plane ...
	I1213 20:25:20.735586   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:25:20.739585   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:25:20.740414   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:25:20.741056   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:25:20.743491   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:26:00.744556   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:26:00.745298   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:00.745523   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:05.746023   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:05.746244   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:15.746586   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:15.746767   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:26:35.747606   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:26:35.747803   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749327   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:27:15.749616   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:27:15.749642   78367 kubeadm.go:310] 
	I1213 20:27:15.749705   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:27:15.749763   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:27:15.749771   78367 kubeadm.go:310] 
	I1213 20:27:15.749801   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:27:15.749858   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:27:15.749970   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:27:15.749978   78367 kubeadm.go:310] 
	I1213 20:27:15.750116   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:27:15.750147   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:27:15.750175   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:27:15.750182   78367 kubeadm.go:310] 
	I1213 20:27:15.750323   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:27:15.750445   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:27:15.750469   78367 kubeadm.go:310] 
	I1213 20:27:15.750594   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:27:15.750679   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:27:15.750750   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:27:15.750838   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:27:15.750867   78367 kubeadm.go:310] 
	I1213 20:27:15.751901   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:27:15.752044   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:27:15.752128   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1213 20:27:15.752253   78367 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 20:27:15.752296   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 20:27:16.207985   78367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 20:27:16.221729   78367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 20:27:16.230896   78367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 20:27:16.230915   78367 kubeadm.go:157] found existing configuration files:
	
	I1213 20:27:16.230963   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 20:27:16.239780   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 20:27:16.239853   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 20:27:16.248841   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 20:27:16.257494   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 20:27:16.257547   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 20:27:16.266220   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.274395   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 20:27:16.274446   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 20:27:16.282941   78367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 20:27:16.291155   78367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 20:27:16.291206   78367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 20:27:16.299780   78367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 20:27:16.492967   78367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 20:29:12.537014   78367 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1213 20:29:12.537124   78367 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1213 20:29:12.538949   78367 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1213 20:29:12.539024   78367 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 20:29:12.539128   78367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 20:29:12.539224   78367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 20:29:12.539305   78367 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 20:29:12.539357   78367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 20:29:12.540964   78367 out.go:235]   - Generating certificates and keys ...
	I1213 20:29:12.541051   78367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 20:29:12.541164   78367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 20:29:12.541297   78367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 20:29:12.541385   78367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1213 20:29:12.541510   78367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 20:29:12.541593   78367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1213 20:29:12.541696   78367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1213 20:29:12.541764   78367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1213 20:29:12.541825   78367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 20:29:12.541886   78367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 20:29:12.541918   78367 kubeadm.go:310] [certs] Using the existing "sa" key
	I1213 20:29:12.541993   78367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 20:29:12.542062   78367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 20:29:12.542141   78367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 20:29:12.542249   78367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 20:29:12.542337   78367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 20:29:12.542454   78367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 20:29:12.542564   78367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 20:29:12.542608   78367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 20:29:12.542689   78367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 20:29:12.544295   78367 out.go:235]   - Booting up control plane ...
	I1213 20:29:12.544374   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 20:29:12.544440   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 20:29:12.544496   78367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 20:29:12.544566   78367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 20:29:12.544708   78367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 20:29:12.544763   78367 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1213 20:29:12.544822   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.544980   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545046   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545210   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545282   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545456   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545529   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545681   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545742   78367 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1213 20:29:12.545910   78367 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1213 20:29:12.545920   78367 kubeadm.go:310] 
	I1213 20:29:12.545956   78367 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1213 20:29:12.545989   78367 kubeadm.go:310] 		timed out waiting for the condition
	I1213 20:29:12.545999   78367 kubeadm.go:310] 
	I1213 20:29:12.546026   78367 kubeadm.go:310] 	This error is likely caused by:
	I1213 20:29:12.546053   78367 kubeadm.go:310] 		- The kubelet is not running
	I1213 20:29:12.546145   78367 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 20:29:12.546153   78367 kubeadm.go:310] 
	I1213 20:29:12.546246   78367 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 20:29:12.546317   78367 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1213 20:29:12.546377   78367 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1213 20:29:12.546386   78367 kubeadm.go:310] 
	I1213 20:29:12.546485   78367 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1213 20:29:12.546561   78367 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1213 20:29:12.546568   78367 kubeadm.go:310] 
	I1213 20:29:12.546677   78367 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1213 20:29:12.546761   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1213 20:29:12.546831   78367 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1213 20:29:12.546913   78367 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1213 20:29:12.546942   78367 kubeadm.go:310] 
	I1213 20:29:12.546976   78367 kubeadm.go:394] duration metric: took 7m57.617019103s to StartCluster
	I1213 20:29:12.547025   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 20:29:12.547089   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 20:29:12.589567   78367 cri.go:89] found id: ""
	I1213 20:29:12.589592   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.589599   78367 logs.go:284] No container was found matching "kube-apiserver"
	I1213 20:29:12.589605   78367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 20:29:12.589660   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 20:29:12.621414   78367 cri.go:89] found id: ""
	I1213 20:29:12.621438   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.621445   78367 logs.go:284] No container was found matching "etcd"
	I1213 20:29:12.621450   78367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 20:29:12.621510   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 20:29:12.652624   78367 cri.go:89] found id: ""
	I1213 20:29:12.652655   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.652666   78367 logs.go:284] No container was found matching "coredns"
	I1213 20:29:12.652674   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 20:29:12.652739   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 20:29:12.682651   78367 cri.go:89] found id: ""
	I1213 20:29:12.682683   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.682693   78367 logs.go:284] No container was found matching "kube-scheduler"
	I1213 20:29:12.682701   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 20:29:12.682767   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 20:29:12.714100   78367 cri.go:89] found id: ""
	I1213 20:29:12.714127   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.714134   78367 logs.go:284] No container was found matching "kube-proxy"
	I1213 20:29:12.714140   78367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 20:29:12.714194   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 20:29:12.745402   78367 cri.go:89] found id: ""
	I1213 20:29:12.745436   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.745446   78367 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 20:29:12.745454   78367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 20:29:12.745515   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 20:29:12.775916   78367 cri.go:89] found id: ""
	I1213 20:29:12.775942   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.775949   78367 logs.go:284] No container was found matching "kindnet"
	I1213 20:29:12.775954   78367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 20:29:12.776009   78367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 20:29:12.806128   78367 cri.go:89] found id: ""
	I1213 20:29:12.806161   78367 logs.go:282] 0 containers: []
	W1213 20:29:12.806171   78367 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 20:29:12.806183   78367 logs.go:123] Gathering logs for container status ...
	I1213 20:29:12.806197   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 20:29:12.841122   78367 logs.go:123] Gathering logs for kubelet ...
	I1213 20:29:12.841151   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 20:29:12.888169   78367 logs.go:123] Gathering logs for dmesg ...
	I1213 20:29:12.888203   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 20:29:12.900707   78367 logs.go:123] Gathering logs for describe nodes ...
	I1213 20:29:12.900733   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 20:29:12.969370   78367 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 20:29:12.969408   78367 logs.go:123] Gathering logs for CRI-O ...
	I1213 20:29:12.969423   78367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 20:29:13.074903   78367 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1213 20:29:13.074961   78367 out.go:270] * 
	W1213 20:29:13.075016   78367 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.075034   78367 out.go:270] * 
	W1213 20:29:13.075878   78367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 20:29:13.079429   78367 out.go:201] 
	W1213 20:29:13.080898   78367 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 20:29:13.080953   78367 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 20:29:13.080984   78367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 20:29:13.082622   78367 out.go:201] 
	
	
	==> CRI-O <==
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.536202963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122631536178477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16c020ab-2acb-40d6-81a5-3bd9bd8d0fef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.536578738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b62bccf4-cf78-45c8-811d-2545a48be555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.536681869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b62bccf4-cf78-45c8-811d-2545a48be555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.536716840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b62bccf4-cf78-45c8-811d-2545a48be555 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.563458887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73f193c7-49d7-4e69-a526-692ec32a66df name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.563540936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73f193c7-49d7-4e69-a526-692ec32a66df name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.565063303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ba780ca-467b-426f-b13b-648afc867abf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.565465406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122631565442360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ba780ca-467b-426f-b13b-648afc867abf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.566071741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a27a6ea-55de-4a54-805a-9655a4515d08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.566131167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a27a6ea-55de-4a54-805a-9655a4515d08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.566167450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a27a6ea-55de-4a54-805a-9655a4515d08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.594282797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3a02d70-4bc8-4cdd-b4b6-afaed626f8d0 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.594351955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3a02d70-4bc8-4cdd-b4b6-afaed626f8d0 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.595378409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9c19f0b-1810-4cbf-849c-750df1623a63 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.595793826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122631595772596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9c19f0b-1810-4cbf-849c-750df1623a63 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.596344553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ee04557-1c73-4060-bec8-b6f8cf67459e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.596408237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ee04557-1c73-4060-bec8-b6f8cf67459e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.596447368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9ee04557-1c73-4060-bec8-b6f8cf67459e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.623576914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=986b6041-2cb5-4753-8536-024352764194 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.623688448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=986b6041-2cb5-4753-8536-024352764194 name=/runtime.v1.RuntimeService/Version
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.624710725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b7c120d-d6a4-4a62-a87c-4a59517f8f43 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.625066000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734122631625044483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b7c120d-d6a4-4a62-a87c-4a59517f8f43 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.625478053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cddfc4ef-9470-4290-83aa-c7cc1f3d7d2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.625526951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cddfc4ef-9470-4290-83aa-c7cc1f3d7d2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 20:43:51 old-k8s-version-613355 crio[625]: time="2024-12-13 20:43:51.625556064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cddfc4ef-9470-4290-83aa-c7cc1f3d7d2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 20:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060967] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039950] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.018359] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.144058] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 20:21] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.064800] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055429] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.157241] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.148226] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.222516] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.266047] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.062703] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.713915] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +12.418230] kauditd_printk_skb: 46 callbacks suppressed
	[Dec13 20:25] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Dec13 20:27] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.061209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:43:51 up 23 min,  0 users,  load average: 0.03, 0.06, 0.07
	Linux old-k8s-version-613355 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc00066b360, 0xc000593f80, 0x23, 0xc000775680)
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: created by internal/singleflight.(*Group).DoChan
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: goroutine 147 [syscall]:
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: net._C2func_getaddrinfo(0xc000c0d5e0, 0x0, 0xc000525980, 0xc000b20478, 0x0, 0x0, 0x0)
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         _cgo_gotypes.go:94 +0x55
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: net.cgoLookupIPCNAME.func1(0xc000c0d5e0, 0x20, 0x20, 0xc000525980, 0xc000b20478, 0x0, 0xc0006e56a0, 0x57a492)
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000593f50, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: net.cgoIPLookup(0xc0002eeb40, 0x48ab5d6, 0x3, 0xc000593f50, 0x1f)
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]: created by net.cgoLookupIP
	Dec 13 20:43:46 old-k8s-version-613355 kubelet[7108]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Dec 13 20:43:46 old-k8s-version-613355 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 13 20:43:46 old-k8s-version-613355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 20:43:47 old-k8s-version-613355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 172.
	Dec 13 20:43:47 old-k8s-version-613355 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 13 20:43:47 old-k8s-version-613355 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 13 20:43:47 old-k8s-version-613355 kubelet[7118]: I1213 20:43:47.119222    7118 server.go:416] Version: v1.20.0
	Dec 13 20:43:47 old-k8s-version-613355 kubelet[7118]: I1213 20:43:47.119480    7118 server.go:837] Client rotation is on, will bootstrap in background
	Dec 13 20:43:47 old-k8s-version-613355 kubelet[7118]: I1213 20:43:47.121335    7118 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 13 20:43:47 old-k8s-version-613355 kubelet[7118]: I1213 20:43:47.122385    7118 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 13 20:43:47 old-k8s-version-613355 kubelet[7118]: W1213 20:43:47.122488    7118 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 2 (216.327783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-613355" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (335.70s)
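
The repeated kubelet-check failures above, together with the kubelet warning "Cannot detect current cgroup on cgroup v2" in the post-mortem log, are consistent with the suggestion minikube itself prints (kubelet cgroup-driver mismatch on the v1.20.0 control plane). A minimal triage sketch for this profile, not part of the test run: the profile name is taken from the log, while the kvm2 driver and cri-o runtime flags are assumptions based on this job's configuration.

	# Re-create the profile with the cgroup driver override suggested in the log
	minikube delete -p old-k8s-version-613355
	minikube start -p old-k8s-version-613355 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If kubeadm init still times out, inspect kubelet and CRI-O state on the node
	minikube ssh -p old-k8s-version-613355 -- 'systemctl status kubelet'
	minikube ssh -p old-k8s-version-613355 -- 'sudo journalctl -xeu kubelet | tail -n 100'
	minikube ssh -p old-k8s-version-613355 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

These commands only reproduce the checks already listed in the kubeadm troubleshooting output and the minikube suggestion (see https://github.com/kubernetes/minikube/issues/4172 referenced above); they do not change the recorded result.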

                                                
                                    

Test pass (277/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 31.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 15.38
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.24
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 88.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.31
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 16.24
37 TestAddons/parallel/InspektorGadget 10.69
40 TestAddons/parallel/CSI 51.64
41 TestAddons/parallel/Headlamp 22
42 TestAddons/parallel/CloudSpanner 6.55
43 TestAddons/parallel/LocalPath 57.14
44 TestAddons/parallel/NvidiaDevicePlugin 6.82
45 TestAddons/parallel/Yakd 11.7
47 TestAddons/StoppedEnableDisable 91.21
48 TestCertOptions 89.03
49 TestCertExpiration 295.81
51 TestForceSystemdFlag 75.94
52 TestForceSystemdEnv 70.44
54 TestKVMDriverInstallOrUpdate 4.64
58 TestErrorSpam/setup 42.55
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.7
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.64
63 TestErrorSpam/stop 5.21
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.84
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.83
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.7
75 TestFunctional/serial/CacheCmd/cache/add_local 2.07
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 33.86
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.33
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.32
90 TestFunctional/parallel/DashboardCmd 14.96
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.84
97 TestFunctional/parallel/ServiceCmdConnect 7.49
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 43.3
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.27
103 TestFunctional/parallel/MySQL 21.58
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.27
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
113 TestFunctional/parallel/License 0.59
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.57
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.76
121 TestFunctional/parallel/ImageCommands/Setup 1.74
122 TestFunctional/parallel/ServiceCmd/DeployApp 22.29
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.65
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.25
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.57
136 TestFunctional/parallel/ImageCommands/ImageRemove 1.44
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.28
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ServiceCmd/List 0.46
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
147 TestFunctional/parallel/MountCmd/any-port 8.38
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
149 TestFunctional/parallel/ServiceCmd/Format 0.29
150 TestFunctional/parallel/ServiceCmd/URL 0.3
151 TestFunctional/parallel/MountCmd/specific-port 1.66
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 199.87
160 TestMultiControlPlane/serial/DeployApp 6.65
161 TestMultiControlPlane/serial/PingHostFromPods 1.09
162 TestMultiControlPlane/serial/AddWorkerNode 52.48
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
165 TestMultiControlPlane/serial/CopyFile 12.48
166 TestMultiControlPlane/serial/StopSecondaryNode 91.58
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.62
168 TestMultiControlPlane/serial/RestartSecondaryNode 49.67
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 431.79
171 TestMultiControlPlane/serial/DeleteSecondaryNode 17.93
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
173 TestMultiControlPlane/serial/StopCluster 272.85
174 TestMultiControlPlane/serial/RestartCluster 126
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
176 TestMultiControlPlane/serial/AddSecondaryNode 76.24
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
181 TestJSONOutput/start/Command 54.27
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.68
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.37
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
209 TestMainNoArgs 0.04
210 TestMinikubeProfile 81.04
213 TestMountStart/serial/StartWithMountFirst 27.83
214 TestMountStart/serial/VerifyMountFirst 0.36
215 TestMountStart/serial/StartWithMountSecond 27
216 TestMountStart/serial/VerifyMountSecond 0.35
217 TestMountStart/serial/DeleteFirst 0.69
218 TestMountStart/serial/VerifyMountPostDelete 0.36
219 TestMountStart/serial/Stop 1.26
220 TestMountStart/serial/RestartStopped 22.17
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 115.74
225 TestMultiNode/serial/DeployApp2Nodes 7.22
226 TestMultiNode/serial/PingHostFrom2Pods 0.72
227 TestMultiNode/serial/AddNode 50.45
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.55
230 TestMultiNode/serial/CopyFile 6.98
231 TestMultiNode/serial/StopNode 2.25
232 TestMultiNode/serial/StartAfterStop 38.62
233 TestMultiNode/serial/RestartKeepsNodes 342.98
234 TestMultiNode/serial/DeleteNode 2.65
235 TestMultiNode/serial/StopMultiNode 182.03
236 TestMultiNode/serial/RestartMultiNode 113.88
237 TestMultiNode/serial/ValidateNameConflict 40.82
244 TestScheduledStopUnix 109.75
248 TestRunningBinaryUpgrade 210.14
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestNoKubernetes/serial/StartWithK8s 94.12
262 TestNetworkPlugins/group/false 3.36
266 TestStoppedBinaryUpgrade/Setup 2.32
267 TestStoppedBinaryUpgrade/Upgrade 133.57
268 TestNoKubernetes/serial/StartWithStopK8s 62.2
269 TestNoKubernetes/serial/Start 29.69
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
271 TestNoKubernetes/serial/ProfileList 27.35
272 TestNoKubernetes/serial/Stop 1.35
273 TestNoKubernetes/serial/StartNoArgs 22.37
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
284 TestPause/serial/Start 92.24
285 TestNetworkPlugins/group/auto/Start 62.58
286 TestNetworkPlugins/group/auto/KubeletFlags 0.19
287 TestNetworkPlugins/group/auto/NetCatPod 12.22
288 TestPause/serial/SecondStartNoReconfiguration 40.15
289 TestNetworkPlugins/group/auto/DNS 0.17
290 TestNetworkPlugins/group/auto/Localhost 0.14
291 TestNetworkPlugins/group/auto/HairPin 0.14
292 TestNetworkPlugins/group/kindnet/Start 64.12
293 TestPause/serial/Pause 0.68
294 TestPause/serial/VerifyStatus 0.26
295 TestPause/serial/Unpause 0.66
296 TestPause/serial/PauseAgain 0.81
297 TestPause/serial/DeletePaused 1.05
298 TestPause/serial/VerifyDeletedResources 2.25
299 TestNetworkPlugins/group/calico/Start 77.63
300 TestNetworkPlugins/group/custom-flannel/Start 92.65
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
303 TestNetworkPlugins/group/kindnet/NetCatPod 13.24
304 TestNetworkPlugins/group/kindnet/DNS 0.19
305 TestNetworkPlugins/group/kindnet/Localhost 0.14
306 TestNetworkPlugins/group/kindnet/HairPin 0.13
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/enable-default-cni/Start 56.6
309 TestNetworkPlugins/group/flannel/Start 97.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.21
311 TestNetworkPlugins/group/calico/NetCatPod 12.21
312 TestNetworkPlugins/group/calico/DNS 0.13
313 TestNetworkPlugins/group/calico/Localhost 0.1
314 TestNetworkPlugins/group/calico/HairPin 0.15
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
317 TestNetworkPlugins/group/custom-flannel/DNS 0.2
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
320 TestNetworkPlugins/group/bridge/Start 101.73
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
325 TestNetworkPlugins/group/enable-default-cni/DNS 21
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
330 TestNetworkPlugins/group/flannel/NetCatPod 11.26
332 TestStartStop/group/embed-certs/serial/FirstStart 55.14
333 TestNetworkPlugins/group/flannel/DNS 0.13
334 TestNetworkPlugins/group/flannel/Localhost 0.13
335 TestNetworkPlugins/group/flannel/HairPin 0.12
337 TestStartStop/group/no-preload/serial/FirstStart 73.18
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
339 TestNetworkPlugins/group/bridge/NetCatPod 12.69
340 TestNetworkPlugins/group/bridge/DNS 0.15
341 TestNetworkPlugins/group/bridge/Localhost 0.11
342 TestNetworkPlugins/group/bridge/HairPin 0.13
343 TestStartStop/group/embed-certs/serial/DeployApp 11.29
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.4
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
347 TestStartStop/group/embed-certs/serial/Stop 91.26
348 TestStartStop/group/no-preload/serial/DeployApp 11.27
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
350 TestStartStop/group/no-preload/serial/Stop 91.02
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.24
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.53
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
355 TestStartStop/group/embed-certs/serial/SecondStart 296.56
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
357 TestStartStop/group/no-preload/serial/SecondStart 348.98
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 339.21
362 TestStartStop/group/old-k8s-version/serial/Stop 5.3
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
368 TestStartStop/group/embed-certs/serial/Pause 2.54
370 TestStartStop/group/newest-cni/serial/FirstStart 46.53
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
373 TestStartStop/group/newest-cni/serial/Stop 10.51
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
375 TestStartStop/group/newest-cni/serial/SecondStart 39.24
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
380 TestStartStop/group/no-preload/serial/Pause 2.86
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
385 TestStartStop/group/newest-cni/serial/Pause 4.2
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
TestDownloadOnly/v1.20.0/json-events (31.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-541042 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-541042 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.291031463s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.29s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1213 19:02:13.051958   19544 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1213 19:02:13.052041   19544 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-541042
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-541042: exit status 85 (55.560332ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-541042 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC |          |
	|         | -p download-only-541042        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:01:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:01:41.800660   19557 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:01:41.800751   19557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:41.800756   19557 out.go:358] Setting ErrFile to fd 2...
	I1213 19:01:41.800759   19557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:01:41.800946   19557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	W1213 19:01:41.801056   19557 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20090-12353/.minikube/config/config.json: open /home/jenkins/minikube-integration/20090-12353/.minikube/config/config.json: no such file or directory
	I1213 19:01:41.801590   19557 out.go:352] Setting JSON to true
	I1213 19:01:41.802469   19557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2645,"bootTime":1734113857,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:01:41.802565   19557 start.go:139] virtualization: kvm guest
	I1213 19:01:41.804779   19557 out.go:97] [download-only-541042] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1213 19:01:41.804880   19557 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 19:01:41.804910   19557 notify.go:220] Checking for updates...
	I1213 19:01:41.806338   19557 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:01:41.807513   19557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:01:41.808698   19557 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:01:41.809893   19557 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:01:41.811078   19557 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 19:01:41.813426   19557 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:01:41.813622   19557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:01:41.911827   19557 out.go:97] Using the kvm2 driver based on user configuration
	I1213 19:01:41.911853   19557 start.go:297] selected driver: kvm2
	I1213 19:01:41.911859   19557 start.go:901] validating driver "kvm2" against <nil>
	I1213 19:01:41.912170   19557 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:01:41.912271   19557 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 19:01:41.926409   19557 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 19:01:41.926448   19557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:01:41.926956   19557 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1213 19:01:41.927118   19557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:01:41.927145   19557 cni.go:84] Creating CNI manager for ""
	I1213 19:01:41.927194   19557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:01:41.927202   19557 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 19:01:41.927263   19557 start.go:340] cluster config:
	{Name:download-only-541042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-541042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:01:41.927415   19557 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:01:41.928987   19557 out.go:97] Downloading VM boot image ...
	I1213 19:01:41.929013   19557 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20090-12353/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1213 19:01:55.150512   19557 out.go:97] Starting "download-only-541042" primary control-plane node in "download-only-541042" cluster
	I1213 19:01:55.150529   19557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:01:55.244407   19557 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 19:01:55.244460   19557 cache.go:56] Caching tarball of preloaded images
	I1213 19:01:55.244667   19557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:01:55.246372   19557 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1213 19:01:55.246387   19557 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:01:55.778956   19557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:11.333082   19557 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:02:11.333174   19557 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-541042 host does not exist
	  To start a cluster, run: "minikube start -p download-only-541042"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
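Editor's note: the preload download in the log above fetches the tarball with an "?checksum=md5:..." query, then saves and verifies that checksum locally. As a rough illustration only (not minikube's actual download.go), the Go sketch below downloads a URL and compares the MD5 of the written bytes against an expected hex digest; the command-line arguments are placeholders.

// checksum_sketch.go - minimal sketch (assumption: not minikube's code): fetch a
// file and compare its MD5 digest against an expected lower-case hex string,
// mirroring the "getting checksum ... verifying checksum" steps in the log above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchAndVerify downloads url to dest and returns an error when the MD5
// digest of the written bytes does not match wantMD5.
func fetchAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the download into both the destination file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Usage: checksum_sketch <url> <dest> <md5-hex>; values are hypothetical here,
	// the real preload URL and md5 appear in the download.go log lines above.
	if err := fetchAndVerify(os.Args[1], os.Args[2], os.Args[3]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}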

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-541042
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (15.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-202348 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-202348 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.380265103s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (15.38s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1213 19:02:28.739748   19544 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1213 19:02:28.739795   19544 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-202348
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-202348: exit status 85 (234.552283ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-541042 | jenkins | v1.34.0 | 13 Dec 24 19:01 UTC |                     |
	|         | -p download-only-541042        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| delete  | -p download-only-541042        | download-only-541042 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC | 13 Dec 24 19:02 UTC |
	| start   | -o=json --download-only        | download-only-202348 | jenkins | v1.34.0 | 13 Dec 24 19:02 UTC |                     |
	|         | -p download-only-202348        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:02:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:02:13.398287   19832 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:02:13.398549   19832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:13.398559   19832 out.go:358] Setting ErrFile to fd 2...
	I1213 19:02:13.398564   19832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:02:13.398774   19832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:02:13.399348   19832 out.go:352] Setting JSON to true
	I1213 19:02:13.400121   19832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2676,"bootTime":1734113857,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:02:13.400213   19832 start.go:139] virtualization: kvm guest
	I1213 19:02:13.402235   19832 out.go:97] [download-only-202348] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:02:13.402381   19832 notify.go:220] Checking for updates...
	I1213 19:02:13.403597   19832 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:02:13.404749   19832 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:02:13.405823   19832 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:02:13.406902   19832 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:02:13.408032   19832 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 19:02:13.410274   19832 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:02:13.410497   19832 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:02:13.442424   19832 out.go:97] Using the kvm2 driver based on user configuration
	I1213 19:02:13.442448   19832 start.go:297] selected driver: kvm2
	I1213 19:02:13.442453   19832 start.go:901] validating driver "kvm2" against <nil>
	I1213 19:02:13.442786   19832 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:13.442875   19832 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20090-12353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 19:02:13.457217   19832 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1213 19:02:13.457258   19832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:02:13.457754   19832 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1213 19:02:13.457918   19832 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:02:13.457946   19832 cni.go:84] Creating CNI manager for ""
	I1213 19:02:13.458003   19832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 19:02:13.458014   19832 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 19:02:13.458074   19832 start.go:340] cluster config:
	{Name:download-only-202348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-202348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:02:13.458175   19832 iso.go:125] acquiring lock: {Name:mkd84f6661a5214d8c2d3a40ad448351a88bfd1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:02:13.459608   19832 out.go:97] Starting "download-only-202348" primary control-plane node in "download-only-202348" cluster
	I1213 19:02:13.459622   19832 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:13.975515   19832 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:13.975544   19832 cache.go:56] Caching tarball of preloaded images
	I1213 19:02:13.975766   19832 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:13.977553   19832 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1213 19:02:13.977578   19832 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:02:14.076887   19832 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1213 19:02:26.842890   19832 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:02:26.842973   19832 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20090-12353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1213 19:02:27.582092   19832 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:02:27.582426   19832 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/download-only-202348/config.json ...
	I1213 19:02:27.582456   19832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/download-only-202348/config.json: {Name:mk8540a00e4df54e8d51a395ad72507096c02e9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:02:27.582629   19832 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:02:27.582810   19832 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20090-12353/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-202348 host does not exist
	  To start a cluster, run: "minikube start -p download-only-202348"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.24s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-202348
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 19:02:29.461625   19544 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-148435 --alsologtostderr --binary-mirror http://127.0.0.1:44529 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-148435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-148435
--- PASS: TestBinaryMirror (0.59s)
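Editor's note: TestBinaryMirror starts a download-only cluster with --binary-mirror http://127.0.0.1:44529, so kubectl and other release binaries are fetched from a local HTTP endpoint instead of dl.k8s.io. The Go sketch below is a minimal stand-in for such a mirror; the ./mirror directory layout is an assumption, not the test's actual helper.

// mirror_sketch.go - minimal sketch of a local binary mirror of the kind
// --binary-mirror points at; not the test's real server, layout assumed.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory of pre-downloaded release binaries, e.g.
	// ./mirror/release/v1.31.2/bin/linux/amd64/kubectl
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on 127.0.0.1:44529")
	log.Fatal(http.ListenAndServe("127.0.0.1:44529", fs))
}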

                                                
                                    
TestOffline (88.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-372192 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-372192 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.984348802s)
helpers_test.go:175: Cleaning up "offline-crio-372192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-372192
--- PASS: TestOffline (88.98s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649719
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-649719: exit status 85 (51.946381ms)

                                                
                                                
-- stdout --
	* Profile "addons-649719" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649719"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649719
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-649719: exit status 85 (52.416501ms)

                                                
                                                
-- stdout --
	* Profile "addons-649719" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649719"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (133.31s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-649719 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-649719 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.307592272s)
--- PASS: TestAddons/Setup (133.31s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-649719 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-649719 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-649719 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-649719 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82b39ce9-4061-4ed5-bc86-ef917d598ff0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [82b39ce9-4061-4ed5-bc86-ef917d598ff0] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003779789s
addons_test.go:633: (dbg) Run:  kubectl --context addons-649719 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-649719 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-649719 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)
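Editor's note: the FakeCredentials check verifies that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into the busybox pod by running printenv inside it. A workload would consume the same injection roughly as in the sketch below (illustrative only, not part of the addon).

// gcpauth_env_sketch.go - minimal sketch of what a pod sees after the gcp-auth
// webhook injects credentials; the variable names come from the test above.
package main

import (
	"fmt"
	"os"
)

func main() {
	creds := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS") // path to the mounted key file
	project := os.Getenv("GOOGLE_CLOUD_PROJECT")
	if creds == "" || project == "" {
		fmt.Println("gcp-auth env vars not injected")
		return
	}
	fmt.Printf("credentials file: %s, project: %s\n", creds, project)
}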

                                                
                                    
TestAddons/parallel/Registry (16.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.911347ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-pj78t" [ce97be6a-8047-4747-a0f2-aa19bd1ffd4e] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003191974s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q8msp" [831a22d5-3f2d-460b-a739-1e316400aebc] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003650922s
addons_test.go:331: (dbg) Run:  kubectl --context addons-649719 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-649719 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-649719 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.482793185s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 ip
2024/12/13 19:05:28 [DEBUG] GET http://192.168.39.191:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.24s)
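Editor's note: the registry test probes the in-cluster service with wget --spider -S http://registry.kube-system.svc.cluster.local and then hits the node IP on port 5000 from the host (the DEBUG GET line above). The Go sketch below performs the same kind of HTTP reachability probe; the endpoint is supplied on the command line rather than hard-coded to this cluster.

// registry_probe_sketch.go - minimal sketch of the reachability check done with
// `wget --spider` in the log above; not addons_test.go itself.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	endpoint := os.Args[1] // e.g. http://192.168.39.191:5000/
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(endpoint)
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		fmt.Fprintf(os.Stderr, "registry returned %s\n", resp.Status)
		os.Exit(1)
	}
	fmt.Println("registry reachable:", resp.Status)
}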

                                                
                                    
TestAddons/parallel/InspektorGadget (10.69s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p2kdq" [ea355240-806f-44f5-afe1-c3bfeaaf939a] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004949947s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable inspektor-gadget --alsologtostderr -v=1: (5.687730495s)
--- PASS: TestAddons/parallel/InspektorGadget (10.69s)
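Editor's note: the InspektorGadget check waits for pods labeled k8s-app=gadget in the gadget namespace to become healthy. One way to reproduce that wait outside the test harness is to delegate to kubectl wait, as in the hedged sketch below; the label, namespace and timeout come from the log, the wrapper itself is not part of addons_test.go.

// gadget_wait_sketch.go - hedged sketch: delegate readiness waiting to
// `kubectl wait`, mirroring the k8s-app=gadget check in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-649719",
		"wait", "--namespace", "gadget",
		"--for=condition=ready", "pod",
		"--selector=k8s-app=gadget", "--timeout=8m0s")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "gadget pods did not become ready:", err)
		os.Exit(1)
	}
}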

                                                
                                    
TestAddons/parallel/CSI (51.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 19:05:29.337121   19544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 19:05:29.342590   19544 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 19:05:29.342610   19544 kapi.go:107] duration metric: took 5.507512ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.514931ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-649719 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-649719 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [54437856-7770-4134-bdda-3aaa3e426774] Pending
helpers_test.go:344: "task-pv-pod" [54437856-7770-4134-bdda-3aaa3e426774] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [54437856-7770-4134-bdda-3aaa3e426774] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003497629s
addons_test.go:511: (dbg) Run:  kubectl --context addons-649719 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649719 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649719 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-649719 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-649719 delete pod task-pv-pod: (1.256415795s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-649719 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-649719 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-649719 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [825cc24c-3c7f-41c0-bf31-fc3a40ad0573] Pending
helpers_test.go:344: "task-pv-pod-restore" [825cc24c-3c7f-41c0-bf31-fc3a40ad0573] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [825cc24c-3c7f-41c0-bf31-fc3a40ad0573] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003471491s
addons_test.go:553: (dbg) Run:  kubectl --context addons-649719 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-649719 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-649719 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.702525046s)
--- PASS: TestAddons/parallel/CSI (51.64s)
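Editor's note: the CSI flow above repeatedly shells out to kubectl get pvc ... -o jsonpath={.status.phase} until the claim reports Bound. The sketch below shows that polling loop in Go via os/exec; it illustrates the pattern only (not the helpers_test.go implementation), and the profile and claim names in main are taken from this run.

// pvc_wait_sketch.go - minimal sketch of the polling loop behind the repeated
// `kubectl get pvc ... -o jsonpath={.status.phase}` calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC phase via kubectl until it reports Bound or
// the timeout elapses.
func waitForPVCBound(kubectx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-649719", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc is Bound")
}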

                                                
                                    
TestAddons/parallel/Headlamp (22s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-649719 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-649719 --alsologtostderr -v=1: (1.045993312s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-bvhdj" [8208d273-3c58-4659-adb3-203c8fde5563] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-bvhdj" [8208d273-3c58-4659-adb3-203c8fde5563] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00366289s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable headlamp --alsologtostderr -v=1: (5.944445402s)
--- PASS: TestAddons/parallel/Headlamp (22.00s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-d4snx" [a912add9-fe15-4bdb-8a9a-1216b538e85f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003154229s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

TestAddons/parallel/LocalPath (57.14s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-649719 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-649719 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f97f0203-5ba3-43ef-94bf-f51e89d01f29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f97f0203-5ba3-43ef-94bf-f51e89d01f29] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f97f0203-5ba3-43ef-94bf-f51e89d01f29] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.017510139s
addons_test.go:906: (dbg) Run:  kubectl --context addons-649719 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 ssh "cat /opt/local-path-provisioner/pvc-71c31fc0-8ce0-4c6c-8d89-dc3684024ee5_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-649719 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-649719 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.271543815s)
--- PASS: TestAddons/parallel/LocalPath (57.14s)
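
Note: the LocalPath flow above is: create a PVC backed by the local-path provisioner, start a pod that writes to it, then read the data back from the node under /opt/local-path-provisioner. A hand-run sketch using the commands logged above (the pvc-<uid> directory name differs on every run, so list the directory instead of hard-coding it):

    kubectl --context addons-649719 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-649719 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-649719 get pvc test-pvc -o jsonpath={.status.phase}    # poll until Bound
    # The provisioned volume lives on the node; its directory name embeds the PVC UID.
    out/minikube-linux-amd64 -p addons-649719 ssh "ls /opt/local-path-provisioner/"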

TestAddons/parallel/NvidiaDevicePlugin (6.82s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7scc7" [9ac38625-793e-41f6-85f0-ceb6f87c9f02] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007461785s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.82s)

TestAddons/parallel/Yakd (11.7s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-md82s" [21d66e4e-e265-42f2-b511-bf3201ffc07b] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004294958s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649719 addons disable yakd --alsologtostderr -v=1: (5.693488238s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

TestAddons/StoppedEnableDisable (91.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-649719
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-649719: (1m30.943270671s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649719
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649719
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-649719
--- PASS: TestAddons/StoppedEnableDisable (91.21s)

TestCertOptions (89.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-121610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-121610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m27.811388041s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-121610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-121610 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-121610 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-121610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-121610
--- PASS: TestCertOptions (89.03s)
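
Note: the run above can be replayed by hand to confirm that the extra --apiserver-ips/--apiserver-names and the non-default --apiserver-port end up in the generated API server certificate. A minimal sketch, reusing the flags and paths logged above and assuming openssl is available inside the guest:

    out/minikube-linux-amd64 start -p cert-options-121610 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # The SAN list should contain 192.168.15.15 and www.google.com.
    out/minikube-linux-amd64 -p cert-options-121610 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    out/minikube-linux-amd64 delete -p cert-options-121610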

TestCertExpiration (295.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.014832341s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m1.153967192s)
helpers_test.go:175: Cleaning up "cert-expiration-616278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-616278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-616278: (1.640011896s)
--- PASS: TestCertExpiration (295.81s)
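
Note: the two starts above exercise certificate regeneration: the first issues cluster certificates with a 3-minute lifetime (--cert-expiration=3m), and the second start on the same profile reissues them with an 8760h (one-year) lifetime once the short-lived ones are about to expire. A hand-run sketch using only the flags logged above:

    out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # wait for the 3m certificates to near expiry, then restart with a longer lifetime
    out/minikube-linux-amd64 start -p cert-expiration-616278 --memory=2048 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-616278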

TestForceSystemdFlag (75.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-187976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-187976 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.950294492s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-187976 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-187976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-187976
--- PASS: TestForceSystemdFlag (75.94s)
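
Note: this test asserts that --force-systemd is propagated into the CRI-O drop-in configuration it inspects. A rough manual check, assuming the cgroup manager is selected via the cgroup_manager key in that drop-in (the test itself only cats the file):

    out/minikube-linux-amd64 start -p force-systemd-flag-187976 --memory=2048 \
      --force-systemd --driver=kvm2 --container-runtime=crio
    # With --force-systemd, the drop-in should select the systemd cgroup manager.
    out/minikube-linux-amd64 -p force-systemd-flag-187976 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    out/minikube-linux-amd64 delete -p force-systemd-flag-187976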

TestForceSystemdEnv (70.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-502984 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-502984 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.464621723s)
helpers_test.go:175: Cleaning up "force-systemd-env-502984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-502984
--- PASS: TestForceSystemdEnv (70.44s)

TestKVMDriverInstallOrUpdate (4.64s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1213 20:04:33.994021   19544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 20:04:33.994163   19544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1213 20:04:34.020631   19544 install.go:62] docker-machine-driver-kvm2: exit status 1
W1213 20:04:34.020933   19544 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1213 20:04:34.021007   19544 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3089449910/001/docker-machine-driver-kvm2
I1213 20:04:34.247462   19544 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3089449910/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080] Decompressors:map[bz2:0xc000787210 gz:0xc000787218 tar:0xc0007871c0 tar.bz2:0xc0007871d0 tar.gz:0xc0007871e0 tar.xz:0xc0007871f0 tar.zst:0xc000787200 tbz2:0xc0007871d0 tgz:0xc0007871e0 txz:0xc0007871f0 tzst:0xc000787200 xz:0xc000787220 zip:0xc000787230 zst:0xc000787228] Getters:map[file:0xc001eac3c0 http:0xc0008b83c0 https:0xc0008b8460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 20:04:34.247499   19544 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3089449910/001/docker-machine-driver-kvm2
I1213 20:04:36.730881   19544 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 20:04:36.730972   19544 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1213 20:04:36.761953   19544 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1213 20:04:36.761981   19544 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1213 20:04:36.762057   19544 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1213 20:04:36.762084   19544 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3089449910/002/docker-machine-driver-kvm2
I1213 20:04:36.829755   19544 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3089449910/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080 0x530a080] Decompressors:map[bz2:0xc000787210 gz:0xc000787218 tar:0xc0007871c0 tar.bz2:0xc0007871d0 tar.gz:0xc0007871e0 tar.xz:0xc0007871f0 tar.zst:0xc000787200 tbz2:0xc0007871d0 tgz:0xc0007871e0 txz:0xc0007871f0 tzst:0xc000787200 xz:0xc000787220 zip:0xc000787230 zst:0xc000787228] Getters:map[file:0xc002181250 http:0xc0006bba40 https:0xc0006bba90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1213 20:04:36.829799   19544 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3089449910/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.64s)
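
Note: the two download attempts above show the updater's fallback path: it first fetches the arch-suffixed release asset (docker-machine-driver-kvm2-amd64), and when that asset's checksum file returns 404 it retries the un-suffixed common asset. A rough shell equivalent of the same fallback, assuming curl is available (the test itself goes through minikube's internal downloader, not curl):

    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # Prefer the arch-specific binary; fall back to the common name if it is missing.
    curl -fLO "$base/docker-machine-driver-kvm2-amd64" \
      || curl -fLO "$base/docker-machine-driver-kvm2"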

TestErrorSpam/setup (42.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-781049 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-781049 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-781049 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-781049 --driver=kvm2  --container-runtime=crio: (42.550018026s)
--- PASS: TestErrorSpam/setup (42.55s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (5.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop: (1.562808859s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop: (1.691167862s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-781049 --log_dir /tmp/nospam-781049 stop: (1.958499639s)
--- PASS: TestErrorSpam/stop (5.21s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20090-12353/.minikube/files/etc/test/nested/copy/19544/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-916183 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.836792135s)
--- PASS: TestFunctional/serial/StartWithProxy (55.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.83s)

=== RUN   TestFunctional/serial/SoftStart
I1213 19:14:11.426338   19544 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --alsologtostderr -v=8
E1213 19:14:44.010038   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.016484   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.027806   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.049222   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.090551   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.172658   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.334196   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:44.655949   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:45.297826   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:46.579402   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:14:49.141741   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-916183 --alsologtostderr -v=8: (40.831360982s)
functional_test.go:663: soft start took 40.831983519s for "functional-916183" cluster.
I1213 19:14:52.258033   19544 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (40.83s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-916183 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:3.1: (1.173717203s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:3.3
E1213 19:14:54.263068   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:3.3: (1.294950389s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 cache add registry.k8s.io/pause:latest: (1.23152263s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.70s)

TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-916183 /tmp/TestFunctionalserialCacheCmdcacheadd_local1315412143/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache add minikube-local-cache-test:functional-916183
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 cache add minikube-local-cache-test:functional-916183: (1.753607337s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache delete minikube-local-cache-test:functional-916183
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-916183
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.12839ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 cache reload: (1.010403706s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
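
Note: the sequence above removes a cached image from inside the node, confirms crictl no longer finds it, and then restores it with cache reload. It can be replayed against the same profile with the exact commands logged above:

    out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
    out/minikube-linux-amd64 -p functional-916183 cache reload
    out/minikube-linux-amd64 -p functional-916183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload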

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 kubectl -- --context functional-916183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-916183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 19:15:04.505129   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:15:24.987225   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-916183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.864320854s)
functional_test.go:761: restart took 33.864451048s for "functional-916183" cluster.
I1213 19:15:34.276431   19544 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (33.86s)
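
Note: --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is applied by re-running start against the existing profile, which restarts the control plane with the extra flag. A minimal way to confirm the apiserver picked it up, assuming the conventional static-pod name kube-apiserver-functional-916183 (the pod name is not shown in the log):

    out/minikube-linux-amd64 start -p functional-916183 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-916183 -n kube-system get pod kube-apiserver-functional-916183 \
      -o yaml | grep enable-admission-plugins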

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-916183 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 logs: (1.326770408s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 logs --file /tmp/TestFunctionalserialLogsFileCmd4279076818/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 logs --file /tmp/TestFunctionalserialLogsFileCmd4279076818/001/logs.txt: (1.327082639s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-916183 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-916183
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-916183: exit status 115 (262.786095ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.205:30752 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-916183 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 config get cpus: exit status 14 (48.928317ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 config get cpus: exit status 14 (50.400694ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (14.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-916183 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-916183 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28899: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.96s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-916183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.689724ms)
-- stdout --
	* [functional-916183] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 19:16:06.363980   28606 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:16:06.364254   28606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:16:06.364264   28606 out.go:358] Setting ErrFile to fd 2...
	I1213 19:16:06.364268   28606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:16:06.364525   28606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:16:06.365135   28606 out.go:352] Setting JSON to false
	I1213 19:16:06.366053   28606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3509,"bootTime":1734113857,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:16:06.366157   28606 start.go:139] virtualization: kvm guest
	I1213 19:16:06.368379   28606 out.go:177] * [functional-916183] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 19:16:06.369750   28606 notify.go:220] Checking for updates...
	I1213 19:16:06.369781   28606 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:16:06.371177   28606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:16:06.372819   28606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:16:06.374088   28606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:16:06.375433   28606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:16:06.376714   28606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:16:06.378490   28606 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:16:06.379128   28606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:16:06.379223   28606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:16:06.396503   28606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I1213 19:16:06.397031   28606 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:16:06.397668   28606 main.go:141] libmachine: Using API Version  1
	I1213 19:16:06.397691   28606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:16:06.398006   28606 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:16:06.398230   28606 main.go:141] libmachine: (functional-916183) Calling .DriverName
	I1213 19:16:06.398560   28606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:16:06.399010   28606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:16:06.399052   28606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:16:06.414723   28606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I1213 19:16:06.415380   28606 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:16:06.415936   28606 main.go:141] libmachine: Using API Version  1
	I1213 19:16:06.415966   28606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:16:06.416328   28606 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:16:06.416499   28606 main.go:141] libmachine: (functional-916183) Calling .DriverName
	I1213 19:16:06.451494   28606 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 19:16:06.452843   28606 start.go:297] selected driver: kvm2
	I1213 19:16:06.452859   28606 start.go:901] validating driver "kvm2" against &{Name:functional-916183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-916183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:16:06.452950   28606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:16:06.455088   28606 out.go:201] 
	W1213 19:16:06.456545   28606 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 19:16:06.457691   28606 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
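
Note: the non-zero exit above is the expected outcome: minikube validates the requested memory before creating anything, and 250MB is below the 1800MB usable minimum, so the dry run aborts with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). Replaying just the validation by hand:

    out/minikube-linux-amd64 start -p functional-916183 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?    # 23 when the memory request is rejected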

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-916183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-916183 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.824228ms)
-- stdout --
	* [functional-916183] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 19:16:06.566375   28668 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:16:06.566484   28668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:16:06.566493   28668 out.go:358] Setting ErrFile to fd 2...
	I1213 19:16:06.566497   28668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:16:06.566750   28668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:16:06.567268   28668 out.go:352] Setting JSON to false
	I1213 19:16:06.568172   28668 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3510,"bootTime":1734113857,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 19:16:06.568279   28668 start.go:139] virtualization: kvm guest
	I1213 19:16:06.570287   28668 out.go:177] * [functional-916183] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1213 19:16:06.571612   28668 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:16:06.571614   28668 notify.go:220] Checking for updates...
	I1213 19:16:06.573834   28668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:16:06.575064   28668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 19:16:06.576292   28668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 19:16:06.577464   28668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 19:16:06.578573   28668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:16:06.580152   28668 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:16:06.580632   28668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:16:06.580689   28668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:16:06.596141   28668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I1213 19:16:06.596621   28668 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:16:06.597281   28668 main.go:141] libmachine: Using API Version  1
	I1213 19:16:06.597312   28668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:16:06.597739   28668 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:16:06.597983   28668 main.go:141] libmachine: (functional-916183) Calling .DriverName
	I1213 19:16:06.598224   28668 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:16:06.598567   28668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:16:06.598601   28668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:16:06.614251   28668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I1213 19:16:06.614729   28668 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:16:06.615204   28668 main.go:141] libmachine: Using API Version  1
	I1213 19:16:06.615227   28668 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:16:06.615559   28668 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:16:06.615722   28668 main.go:141] libmachine: (functional-916183) Calling .DriverName
	I1213 19:16:06.652433   28668 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 19:16:06.653526   28668 start.go:297] selected driver: kvm2
	I1213 19:16:06.653542   28668 start.go:901] validating driver "kvm2" against &{Name:functional-916183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-916183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:16:06.653663   28668 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:16:06.655870   28668 out.go:201] 
	W1213 19:16:06.657050   28668 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 19:16:06.658184   28668 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
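The non-zero exit above is the point of the test: with the locale switched to French, minikube prints its startup banner and the RSRC_INSUFFICIENT_REQ_MEMORY rejection in French. The key stderr line translates as "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB", and "Utilisation du pilote kvm2 basé sur le profil existant" is "Using the kvm2 driver based on the existing profile". A minimal sketch of reproducing this outside the harness, assuming the locale is selected through the LC_ALL environment variable (the log does not show the exact mechanism the test uses):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Hypothetical reproduction: rerun the same dry-run start with an
		// undersized --memory and a French locale, then look for the
		// localized RSRC_INSUFFICIENT_REQ_MEMORY message in the output.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "functional-916183", "--dry-run", "--memory", "250MB",
			"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumption: locale chosen via env
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("expected a non-zero exit for 250MB, got success")
			return
		}
		if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
			fmt.Println("got the expected insufficient-memory rejection")
		}
	}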

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
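For reference, the --format string in the second status call above is an ordinary Go template evaluated against the status struct, so labels like "host:" and "kublet:" are free text while {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the fields being read; the third call asks for the same data as JSON. A minimal sketch of consuming that JSON form, declaring only the fields the template references (the real payload has more, so treat the struct as an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields referenced by the test's template are declared here.
	type minikubeStatus struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-916183", "status", "-o", "json").Output()
		if err != nil {
			// status exits non-zero when components are down; stdout may still hold JSON.
			fmt.Println("non-zero exit:", err)
		}
		var st minikubeStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unmarshal:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}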

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-916183 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-916183 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z88jr" [2641eb1c-8cb4-409f-a865-ca2be753b1fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z88jr" [2641eb1c-8cb4-409f-a865-ca2be753b1fa] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005144982s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.205:30593
functional_test.go:1675: http://192.168.39.205:30593: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-z88jr

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.205:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.205:30593
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.49s)
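The block above is the standard NodePort round trip: create a Deployment from the echoserver image, expose it on port 8080 as a NodePort Service, ask minikube for the node URL, and check the HTTP response body. A minimal sketch of the final verification step, assuming the URL printed by `minikube service hello-node-connect --url` (http://192.168.39.205:30593 in this run) is passed as the first argument:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Println("usage: checkecho <service-url>")
			os.Exit(2)
		}
		url := os.Args[1] // e.g. the NodePort URL printed by `minikube service hello-node-connect --url`
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The echoserver reply includes a "Hostname:" line naming the serving pod.
		if strings.Contains(string(body), "Hostname:") {
			fmt.Printf("echoserver reachable:\n%s", body)
		} else {
			fmt.Printf("unexpected body:\n%s", body)
		}
	}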

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [22140469-9923-4396-ae63-3824364909d4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00378286s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-916183 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-916183 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-916183 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-916183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ccd554b-83c0-46d9-a02e-a3b3e684450a] Pending
helpers_test.go:344: "sp-pod" [8ccd554b-83c0-46d9-a02e-a3b3e684450a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ccd554b-83c0-46d9-a02e-a3b3e684450a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004285985s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-916183 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-916183 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-916183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8712a488-5a20-4051-b54c-08b7a1145cd5] Pending
helpers_test.go:344: "sp-pod" [8712a488-5a20-4051-b54c-08b7a1145cd5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8712a488-5a20-4051-b54c-08b7a1145cd5] Running
2024/12/13 19:16:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003164948s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-916183 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.30s)
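The sequence above is the actual persistence check: bind a PVC, run a pod that mounts it, write /tmp/mount/foo, delete the pod, start a replacement pod against the same claim, and confirm the file is still there. A minimal sketch of that flow driven through kubectl, mirroring the logged steps (the manifest paths and pod name are taken from the log; the readiness waits are omitted):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl against the functional-916183 context and
	// fails loudly, mirroring the steps recorded in the test log above.
	func run(args ...string) {
		full := append([]string{"--context", "functional-916183"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		fmt.Printf("kubectl %v\n%s", args, out)
	}

	func main() {
		run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (wait for sp-pod to be Running before exec-ing; omitted here)
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (wait for the replacement sp-pod to be Running; omitted here)
		run("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo must still be listed
	}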

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh -n functional-916183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cp functional-916183:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd910007337/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh -n functional-916183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh -n functional-916183 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)
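Each cp above is paired with an `ssh ... sudo cat` that proves the file arrived with the expected content. A minimal sketch of one such round trip, using the same profile and paths as the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "functional-916183"
		// Copy a local file into the node, then read it back over ssh.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh cat failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("copied content:\n%s", out)
	}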

                                                
                                    
TestFunctional/parallel/MySQL (21.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-916183 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gqh9n" [5ef1514f-6a94-4cde-9867-35672456d419] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gqh9n" [5ef1514f-6a94-4cde-9867-35672456d419] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003223791s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-916183 exec mysql-6cdb49bbb-gqh9n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-916183 exec mysql-6cdb49bbb-gqh9n -- mysql -ppassword -e "show databases;": exit status 1 (457.523305ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 19:16:01.186884   19544 retry.go:31] will retry after 688.002282ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-916183 exec mysql-6cdb49bbb-gqh9n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-916183 exec mysql-6cdb49bbb-gqh9n -- mysql -ppassword -e "show databases;": exit status 1 (322.06419ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 19:16:02.197393   19544 retry.go:31] will retry after 794.811908ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-916183 exec mysql-6cdb49bbb-gqh9n -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.58s)
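The two non-zero exits above are expected churn: ERROR 2002 only means mysqld inside the pod has not finished creating its socket yet, so the harness backs off (retry.go) and reruns `show databases;` until it succeeds. A minimal sketch of the same poll-with-backoff pattern around the kubectl exec; the pod name is taken from the log and the backoff values are arbitrary:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-6cdb49bbb-gqh9n" // from the log; a real caller would look this up
		backoff := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-916183",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
				return
			}
			// Typically "ERROR 2002 ... Can't connect to local MySQL server through socket"
			// while mysqld is still starting up.
			fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2
		}
		fmt.Println("mysql never became ready")
	}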

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/19544/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /etc/test/nested/copy/19544/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/19544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /etc/ssl/certs/19544.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/19544.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /usr/share/ca-certificates/19544.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/195442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /etc/ssl/certs/195442.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/195442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /usr/share/ca-certificates/195442.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
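Besides the copied .pem files, the test checks hashed names such as /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0; names of that form follow OpenSSL's subject-hash convention for CA directories. A minimal sketch of computing the expected hashed filename for a certificate, assuming the openssl CLI is available and using a hypothetical path for the cert being checked:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// OpenSSL's c_rehash-style name is "<subject hash>.0"; this is how names
		// like 51391683.0 in the log relate to the synced certificates.
		certPath := "/path/to/19544.pem" // hypothetical path to the cert under test
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", certPath).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("expected name in /etc/ssl/certs: %s.0\n", hash)
	}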

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-916183 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
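The --template argument above is a plain Go text/template: it ranges over the first node's .metadata.labels map and prints only the keys, space separated. A small stand-alone illustration of the same construct evaluated against a hypothetical label map (real node labels will differ):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The same construct the test passes to kubectl --output=go-template,
		// minus the `(index .items 0)` step since we start from one node's labels.
		const tmpl = `'{{range $k, $v := .}}{{$k}} {{end}}'`
		labels := map[string]string{ // hypothetical node labels
			"kubernetes.io/hostname": "functional-916183",
			"kubernetes.io/os":       "linux",
		}
		t := template.Must(template.New("labels").Parse(tmpl))
		_ = t.Execute(os.Stdout, labels) // prints: 'kubernetes.io/hostname kubernetes.io/os '
	}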

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "sudo systemctl is-active docker": exit status 1 (223.888681ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "sudo systemctl is-active containerd": exit status 1 (216.067582ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
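The non-zero exits above are the expected outcome: on a crio cluster both `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, which the minikube ssh wrapper surfaces as exit status 1. A minimal sketch of running the same check from the host for all three runtimes, assuming the usual systemd convention that only an active unit exits 0:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-916183",
				"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).CombinedOutput()
			state := strings.TrimSpace(string(out))
			// For a crio cluster we expect docker/containerd to be "inactive" (err != nil)
			// and crio to be "active" (err == nil).
			fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		}
	}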

                                                
                                    
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-916183 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-916183
localhost/kicbase/echo-server:functional-916183
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-916183 image ls --format short --alsologtostderr:
I1213 19:16:07.330976   28788 out.go:345] Setting OutFile to fd 1 ...
I1213 19:16:07.331084   28788 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.331092   28788 out.go:358] Setting ErrFile to fd 2...
I1213 19:16:07.331096   28788 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.331256   28788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
I1213 19:16:07.331826   28788 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.331912   28788 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.332275   28788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.332320   28788 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.347025   28788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
I1213 19:16:07.347456   28788 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.348040   28788 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.348064   28788 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.348358   28788 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.348530   28788 main.go:141] libmachine: (functional-916183) Calling .GetState
I1213 19:16:07.350330   28788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.350363   28788 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.366779   28788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
I1213 19:16:07.367244   28788 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.367759   28788 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.367782   28788 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.368105   28788 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.368318   28788 main.go:141] libmachine: (functional-916183) Calling .DriverName
I1213 19:16:07.368518   28788 ssh_runner.go:195] Run: systemctl --version
I1213 19:16:07.368554   28788 main.go:141] libmachine: (functional-916183) Calling .GetSSHHostname
I1213 19:16:07.371418   28788 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.371848   28788 main.go:141] libmachine: (functional-916183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:27:8a", ip: ""} in network mk-functional-916183: {Iface:virbr1 ExpiryTime:2024-12-13 20:13:29 +0000 UTC Type:0 Mac:52:54:00:ea:27:8a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-916183 Clientid:01:52:54:00:ea:27:8a}
I1213 19:16:07.371876   28788 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined IP address 192.168.39.205 and MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.371969   28788 main.go:141] libmachine: (functional-916183) Calling .GetSSHPort
I1213 19:16:07.372123   28788 main.go:141] libmachine: (functional-916183) Calling .GetSSHKeyPath
I1213 19:16:07.372264   28788 main.go:141] libmachine: (functional-916183) Calling .GetSSHUsername
I1213 19:16:07.372396   28788 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/functional-916183/id_rsa Username:docker}
I1213 19:16:07.471163   28788 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 19:16:07.538323   28788 main.go:141] libmachine: Making call to close driver server
I1213 19:16:07.538339   28788 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:07.538616   28788 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:07.538647   28788 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 19:16:07.538663   28788 main.go:141] libmachine: Making call to close driver server
I1213 19:16:07.538669   28788 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:07.538672   28788 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:07.538954   28788 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:07.538958   28788 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:07.538978   28788 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
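As the stderr trace shows, `image ls` is backed by a single `sudo crictl images --output json` run on the node over ssh; the short, table, json and yaml formats are just different renderings of that payload. A minimal sketch of reading the same data directly, assuming crictl's JSON output is an object with an "images" array whose entries carry id, repoTags and size (the tags and digests visible in the json/yaml listings below match that shape):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Just the fields this sketch needs from `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Size     string   `json:"size"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-916183",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			fmt.Println("crictl over ssh failed:", err)
			return
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("unmarshal:", err)
			return
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. registry.k8s.io/pause:3.10
			}
		}
	}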

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-916183 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-916183  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-916183  | 9d960cb238156 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-916183 image ls --format table --alsologtostderr:
I1213 19:16:07.809656   28839 out.go:345] Setting OutFile to fd 1 ...
I1213 19:16:07.809782   28839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.809794   28839 out.go:358] Setting ErrFile to fd 2...
I1213 19:16:07.809801   28839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.809998   28839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
I1213 19:16:07.810605   28839 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.810696   28839 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.811100   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.811150   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.826182   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
I1213 19:16:07.826695   28839 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.827195   28839 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.827218   28839 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.827525   28839 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.827664   28839 main.go:141] libmachine: (functional-916183) Calling .GetState
I1213 19:16:07.829442   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.829485   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.844471   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
I1213 19:16:07.844979   28839 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.845491   28839 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.845535   28839 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.845880   28839 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.846089   28839 main.go:141] libmachine: (functional-916183) Calling .DriverName
I1213 19:16:07.846320   28839 ssh_runner.go:195] Run: systemctl --version
I1213 19:16:07.846363   28839 main.go:141] libmachine: (functional-916183) Calling .GetSSHHostname
I1213 19:16:07.849160   28839 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.849622   28839 main.go:141] libmachine: (functional-916183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:27:8a", ip: ""} in network mk-functional-916183: {Iface:virbr1 ExpiryTime:2024-12-13 20:13:29 +0000 UTC Type:0 Mac:52:54:00:ea:27:8a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-916183 Clientid:01:52:54:00:ea:27:8a}
I1213 19:16:07.849664   28839 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined IP address 192.168.39.205 and MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.849787   28839 main.go:141] libmachine: (functional-916183) Calling .GetSSHPort
I1213 19:16:07.849957   28839 main.go:141] libmachine: (functional-916183) Calling .GetSSHKeyPath
I1213 19:16:07.850129   28839 main.go:141] libmachine: (functional-916183) Calling .GetSSHUsername
I1213 19:16:07.850336   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/functional-916183/id_rsa Username:docker}
I1213 19:16:07.941626   28839 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 19:16:08.014310   28839 main.go:141] libmachine: Making call to close driver server
I1213 19:16:08.014332   28839 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:08.014578   28839 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:08.014598   28839 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 19:16:08.014614   28839 main.go:141] libmachine: Making call to close driver server
I1213 19:16:08.014623   28839 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:08.014967   28839 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:08.015076   28839 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:08.015089   28839 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-916183 image ls --format json --alsologtostderr:
[{"id":"9d960cb2381566c550d2386aea83cc87009b6cdf31f28d98c0c04379e4560c5e","repoDigests":["localhost/minikube-local-cache-test@sha256:390a733a910354d926ca7cd795f3ed53280c9af87a3c749241000158d7c69808"],"repoTags":["localhost/minikube-local-cache-test:functional-916183"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256
:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoD
igests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"5107333e
08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-916183"],"size":"4943877"},{"id":"0486b6c53a1b5af2
6f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de
60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-916183 image ls --format json --alsologtostderr:
I1213 19:16:07.588741   28813 out.go:345] Setting OutFile to fd 1 ...
I1213 19:16:07.588870   28813 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.588880   28813 out.go:358] Setting ErrFile to fd 2...
I1213 19:16:07.588887   28813 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:07.589068   28813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
I1213 19:16:07.589684   28813 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.589799   28813 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:07.590178   28813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.590239   28813 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.604812   28813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
I1213 19:16:07.605238   28813 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.605849   28813 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.605878   28813 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.606185   28813 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.606372   28813 main.go:141] libmachine: (functional-916183) Calling .GetState
I1213 19:16:07.608285   28813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:07.608327   28813 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:07.622487   28813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
I1213 19:16:07.622940   28813 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:07.623415   28813 main.go:141] libmachine: Using API Version  1
I1213 19:16:07.623453   28813 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:07.623717   28813 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:07.623870   28813 main.go:141] libmachine: (functional-916183) Calling .DriverName
I1213 19:16:07.624066   28813 ssh_runner.go:195] Run: systemctl --version
I1213 19:16:07.624101   28813 main.go:141] libmachine: (functional-916183) Calling .GetSSHHostname
I1213 19:16:07.626565   28813 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.626954   28813 main.go:141] libmachine: (functional-916183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:27:8a", ip: ""} in network mk-functional-916183: {Iface:virbr1 ExpiryTime:2024-12-13 20:13:29 +0000 UTC Type:0 Mac:52:54:00:ea:27:8a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-916183 Clientid:01:52:54:00:ea:27:8a}
I1213 19:16:07.626996   28813 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined IP address 192.168.39.205 and MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:07.627116   28813 main.go:141] libmachine: (functional-916183) Calling .GetSSHPort
I1213 19:16:07.627275   28813 main.go:141] libmachine: (functional-916183) Calling .GetSSHKeyPath
I1213 19:16:07.627465   28813 main.go:141] libmachine: (functional-916183) Calling .GetSSHUsername
I1213 19:16:07.627600   28813 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/functional-916183/id_rsa Username:docker}
I1213 19:16:07.705938   28813 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 19:16:07.758171   28813 main.go:141] libmachine: Making call to close driver server
I1213 19:16:07.758182   28813 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:07.758443   28813 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:07.758462   28813 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 19:16:07.758478   28813 main.go:141] libmachine: Making call to close driver server
I1213 19:16:07.758486   28813 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:07.758486   28813 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:07.758698   28813 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:07.758716   28813 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:07.758735   28813 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-916183 image ls --format yaml --alsologtostderr:
- id: 9d960cb2381566c550d2386aea83cc87009b6cdf31f28d98c0c04379e4560c5e
repoDigests:
- localhost/minikube-local-cache-test@sha256:390a733a910354d926ca7cd795f3ed53280c9af87a3c749241000158d7c69808
repoTags:
- localhost/minikube-local-cache-test:functional-916183
size: "3330"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-916183
size: "4943877"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-916183 image ls --format yaml --alsologtostderr:
I1213 19:16:08.063476   28880 out.go:345] Setting OutFile to fd 1 ...
I1213 19:16:08.064087   28880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:08.064141   28880 out.go:358] Setting ErrFile to fd 2...
I1213 19:16:08.064158   28880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:08.064660   28880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
I1213 19:16:08.065704   28880 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:08.065826   28880 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:08.066162   28880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:08.066205   28880 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:08.082683   28880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
I1213 19:16:08.083202   28880 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:08.083762   28880 main.go:141] libmachine: Using API Version  1
I1213 19:16:08.083787   28880 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:08.084145   28880 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:08.084338   28880 main.go:141] libmachine: (functional-916183) Calling .GetState
I1213 19:16:08.086170   28880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:08.086217   28880 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:08.101722   28880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
I1213 19:16:08.102246   28880 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:08.102902   28880 main.go:141] libmachine: Using API Version  1
I1213 19:16:08.102938   28880 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:08.103285   28880 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:08.103526   28880 main.go:141] libmachine: (functional-916183) Calling .DriverName
I1213 19:16:08.103773   28880 ssh_runner.go:195] Run: systemctl --version
I1213 19:16:08.103797   28880 main.go:141] libmachine: (functional-916183) Calling .GetSSHHostname
I1213 19:16:08.106676   28880 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:08.107154   28880 main.go:141] libmachine: (functional-916183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:27:8a", ip: ""} in network mk-functional-916183: {Iface:virbr1 ExpiryTime:2024-12-13 20:13:29 +0000 UTC Type:0 Mac:52:54:00:ea:27:8a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-916183 Clientid:01:52:54:00:ea:27:8a}
I1213 19:16:08.107168   28880 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined IP address 192.168.39.205 and MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:08.107328   28880 main.go:141] libmachine: (functional-916183) Calling .GetSSHPort
I1213 19:16:08.107466   28880 main.go:141] libmachine: (functional-916183) Calling .GetSSHKeyPath
I1213 19:16:08.107583   28880 main.go:141] libmachine: (functional-916183) Calling .GetSSHUsername
I1213 19:16:08.107669   28880 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/functional-916183/id_rsa Username:docker}
I1213 19:16:08.204734   28880 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 19:16:08.260073   28880 main.go:141] libmachine: Making call to close driver server
I1213 19:16:08.260104   28880 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:08.260374   28880 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:08.260430   28880 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:08.260452   28880 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 19:16:08.260466   28880 main.go:141] libmachine: Making call to close driver server
I1213 19:16:08.260526   28880 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:08.260748   28880 main.go:141] libmachine: (functional-916183) DBG | Closing plugin on server side
I1213 19:16:08.260754   28880 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:08.260778   28880 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh pgrep buildkitd: exit status 1 (225.086792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image build -t localhost/my-image:functional-916183 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image build -t localhost/my-image:functional-916183 testdata/build --alsologtostderr: (3.261566138s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-916183 image build -t localhost/my-image:functional-916183 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a796adb26bc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-916183
--> 6942084db7b
Successfully tagged localhost/my-image:functional-916183
6942084db7b8635ce02d3bb56980aa202b9b8617c9033afe57939c4a05d4dd7a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-916183 image build -t localhost/my-image:functional-916183 testdata/build --alsologtostderr:
I1213 19:16:08.539788   28944 out.go:345] Setting OutFile to fd 1 ...
I1213 19:16:08.539939   28944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:08.539949   28944 out.go:358] Setting ErrFile to fd 2...
I1213 19:16:08.539958   28944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:16:08.540187   28944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
I1213 19:16:08.540775   28944 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:08.541269   28944 config.go:182] Loaded profile config "functional-916183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:16:08.541620   28944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:08.541656   28944 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:08.556897   28944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
I1213 19:16:08.557429   28944 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:08.558010   28944 main.go:141] libmachine: Using API Version  1
I1213 19:16:08.558043   28944 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:08.558380   28944 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:08.558583   28944 main.go:141] libmachine: (functional-916183) Calling .GetState
I1213 19:16:08.560473   28944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1213 19:16:08.560516   28944 main.go:141] libmachine: Launching plugin server for driver kvm2
I1213 19:16:08.574689   28944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
I1213 19:16:08.575202   28944 main.go:141] libmachine: () Calling .GetVersion
I1213 19:16:08.575623   28944 main.go:141] libmachine: Using API Version  1
I1213 19:16:08.575643   28944 main.go:141] libmachine: () Calling .SetConfigRaw
I1213 19:16:08.575951   28944 main.go:141] libmachine: () Calling .GetMachineName
I1213 19:16:08.576143   28944 main.go:141] libmachine: (functional-916183) Calling .DriverName
I1213 19:16:08.576338   28944 ssh_runner.go:195] Run: systemctl --version
I1213 19:16:08.576375   28944 main.go:141] libmachine: (functional-916183) Calling .GetSSHHostname
I1213 19:16:08.579206   28944 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:08.579584   28944 main.go:141] libmachine: (functional-916183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:27:8a", ip: ""} in network mk-functional-916183: {Iface:virbr1 ExpiryTime:2024-12-13 20:13:29 +0000 UTC Type:0 Mac:52:54:00:ea:27:8a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:functional-916183 Clientid:01:52:54:00:ea:27:8a}
I1213 19:16:08.579609   28944 main.go:141] libmachine: (functional-916183) DBG | domain functional-916183 has defined IP address 192.168.39.205 and MAC address 52:54:00:ea:27:8a in network mk-functional-916183
I1213 19:16:08.579732   28944 main.go:141] libmachine: (functional-916183) Calling .GetSSHPort
I1213 19:16:08.579887   28944 main.go:141] libmachine: (functional-916183) Calling .GetSSHKeyPath
I1213 19:16:08.580007   28944 main.go:141] libmachine: (functional-916183) Calling .GetSSHUsername
I1213 19:16:08.580159   28944 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/functional-916183/id_rsa Username:docker}
I1213 19:16:08.661694   28944 build_images.go:161] Building image from path: /tmp/build.609474571.tar
I1213 19:16:08.661760   28944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 19:16:08.679695   28944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.609474571.tar
I1213 19:16:08.684534   28944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.609474571.tar: stat -c "%s %y" /var/lib/minikube/build/build.609474571.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.609474571.tar': No such file or directory
I1213 19:16:08.684573   28944 ssh_runner.go:362] scp /tmp/build.609474571.tar --> /var/lib/minikube/build/build.609474571.tar (3072 bytes)
I1213 19:16:08.708317   28944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.609474571
I1213 19:16:08.718160   28944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.609474571 -xf /var/lib/minikube/build/build.609474571.tar
I1213 19:16:08.726953   28944 crio.go:315] Building image: /var/lib/minikube/build/build.609474571
I1213 19:16:08.727021   28944 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-916183 /var/lib/minikube/build/build.609474571 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 19:16:11.692783   28944 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-916183 /var/lib/minikube/build/build.609474571 --cgroup-manager=cgroupfs: (2.96572791s)
I1213 19:16:11.692858   28944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.609474571
I1213 19:16:11.709257   28944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.609474571.tar
I1213 19:16:11.748113   28944 build_images.go:217] Built localhost/my-image:functional-916183 from /tmp/build.609474571.tar
I1213 19:16:11.748150   28944 build_images.go:133] succeeded building to: functional-916183
I1213 19:16:11.748157   28944 build_images.go:134] failed building to: 
I1213 19:16:11.748178   28944 main.go:141] libmachine: Making call to close driver server
I1213 19:16:11.748194   28944 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:11.748457   28944 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:11.748472   28944 main.go:141] libmachine: Making call to close connection to plugin binary
I1213 19:16:11.748481   28944 main.go:141] libmachine: Making call to close driver server
I1213 19:16:11.748489   28944 main.go:141] libmachine: (functional-916183) Calling .Close
I1213 19:16:11.748707   28944 main.go:141] libmachine: Successfully made call to close driver server
I1213 19:16:11.748731   28944 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.720773497s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-916183
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-916183 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-916183 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-tgvzl" [cfbf7771-b690-4f65-942c-8cabc294ec6f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-tgvzl" [cfbf7771-b690-4f65-942c-8cabc294ec6f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.136399634s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image load --daemon kicbase/echo-server:functional-916183 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image load --daemon kicbase/echo-server:functional-916183 --alsologtostderr: (1.19317125s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image load --daemon kicbase/echo-server:functional-916183 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-916183
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image load --daemon kicbase/echo-server:functional-916183 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image load --daemon kicbase/echo-server:functional-916183 --alsologtostderr: (3.175194632s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image save kicbase/echo-server:functional-916183 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image save kicbase/echo-server:functional-916183 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.565350478s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image rm kicbase/echo-server:functional-916183 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image rm kicbase/echo-server:functional-916183 --alsologtostderr: (1.02128455s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-916183 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.07199186s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-916183
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 image save --daemon kicbase/echo-server:functional-916183 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-916183
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "287.672183ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.570434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "294.494092ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "54.269937ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service list -o json
functional_test.go:1494: Took "436.878843ms" to run "out/minikube-linux-amd64 -p functional-916183 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdany-port2314640104/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734117365136896174" to /tmp/TestFunctionalparallelMountCmdany-port2314640104/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734117365136896174" to /tmp/TestFunctionalparallelMountCmdany-port2314640104/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734117365136896174" to /tmp/TestFunctionalparallelMountCmdany-port2314640104/001/test-1734117365136896174
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (192.909901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 19:16:05.330118   19544 retry.go:31] will retry after 395.326077ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh -- ls -la /mount-9p
E1213 19:16:05.949565   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 19:16 test-1734117365136896174
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh cat /mount-9p/test-1734117365136896174
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-916183 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [18d239e3-f52f-4b8f-af8d-03c3ab26312c] Pending
helpers_test.go:344: "busybox-mount" [18d239e3-f52f-4b8f-af8d-03c3ab26312c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [18d239e3-f52f-4b8f-af8d-03c3ab26312c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [18d239e3-f52f-4b8f-af8d-03c3ab26312c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003933994s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-916183 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdany-port2314640104/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.205:32015
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.205:32015
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdspecific-port3947972093/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.483552ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 19:16:13.741085   19544 retry.go:31] will retry after 331.98075ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdspecific-port3947972093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "sudo umount -f /mount-9p": exit status 1 (275.872222ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-916183 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdspecific-port3947972093/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T" /mount1: exit status 1 (292.458412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 19:16:15.478034   19544 retry.go:31] will retry after 535.443077ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-916183 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-916183 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-916183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4014455236/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-916183
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-916183
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-916183
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-829578 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 19:17:27.873566   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:19:44.009961   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-829578 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.217770278s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.87s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-829578 -- rollout status deployment/busybox: (4.705798709s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-9s9jp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-tnm9n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-vjhhm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-9s9jp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-tnm9n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-vjhhm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-9s9jp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-tnm9n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-vjhhm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-9s9jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-9s9jp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-tnm9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-tnm9n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-vjhhm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-829578 -- exec busybox-7dff88458-vjhhm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (52.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-829578 -v=7 --alsologtostderr
E1213 19:20:11.716777   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.726069   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.732390   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.743908   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.765337   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.807365   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:41.888799   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:42.050387   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:42.372109   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:43.013599   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:20:44.295605   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-829578 -v=7 --alsologtostderr: (51.658665621s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
E1213 19:20:46.857088   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.48s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-829578 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp testdata/cp-test.txt ha-829578:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2708022533/001/cp-test_ha-829578.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578:/home/docker/cp-test.txt ha-829578-m02:/home/docker/cp-test_ha-829578_ha-829578-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test_ha-829578_ha-829578-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578:/home/docker/cp-test.txt ha-829578-m03:/home/docker/cp-test_ha-829578_ha-829578-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test_ha-829578_ha-829578-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578:/home/docker/cp-test.txt ha-829578-m04:/home/docker/cp-test_ha-829578_ha-829578-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test_ha-829578_ha-829578-m04.txt"
E1213 19:20:51.979288   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp testdata/cp-test.txt ha-829578-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2708022533/001/cp-test_ha-829578-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m02:/home/docker/cp-test.txt ha-829578:/home/docker/cp-test_ha-829578-m02_ha-829578.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test_ha-829578-m02_ha-829578.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m02:/home/docker/cp-test.txt ha-829578-m03:/home/docker/cp-test_ha-829578-m02_ha-829578-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test_ha-829578-m02_ha-829578-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m02:/home/docker/cp-test.txt ha-829578-m04:/home/docker/cp-test_ha-829578-m02_ha-829578-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test_ha-829578-m02_ha-829578-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp testdata/cp-test.txt ha-829578-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2708022533/001/cp-test_ha-829578-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m03:/home/docker/cp-test.txt ha-829578:/home/docker/cp-test_ha-829578-m03_ha-829578.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test_ha-829578-m03_ha-829578.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m03:/home/docker/cp-test.txt ha-829578-m02:/home/docker/cp-test_ha-829578-m03_ha-829578-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test_ha-829578-m03_ha-829578-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m03:/home/docker/cp-test.txt ha-829578-m04:/home/docker/cp-test_ha-829578-m03_ha-829578-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test_ha-829578-m03_ha-829578-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp testdata/cp-test.txt ha-829578-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2708022533/001/cp-test_ha-829578-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m04:/home/docker/cp-test.txt ha-829578:/home/docker/cp-test_ha-829578-m04_ha-829578.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578 "sudo cat /home/docker/cp-test_ha-829578-m04_ha-829578.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m04:/home/docker/cp-test.txt ha-829578-m02:/home/docker/cp-test_ha-829578-m04_ha-829578-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test_ha-829578-m04_ha-829578-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 cp ha-829578-m04:/home/docker/cp-test.txt ha-829578-m03:/home/docker/cp-test_ha-829578-m04_ha-829578-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m03 "sudo cat /home/docker/cp-test_ha-829578-m04_ha-829578-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.48s)
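
For reference, the copy/verify pattern exercised above is just `minikube cp` plus `minikube ssh -n`; a minimal bash sketch (not part of the suite) reproducing one hop, assuming the ha-829578 profile from this run is still up and a throwaway file at /tmp/cp-test.txt:

	# copy a host file to the primary node, fan it out to m02, and read it back
	printf 'hello from the host\n' > /tmp/cp-test.txt
	out/minikube-linux-amd64 -p ha-829578 cp /tmp/cp-test.txt ha-829578:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-829578 cp ha-829578:/home/docker/cp-test.txt ha-829578-m02:/home/docker/cp-test_ha-829578_ha-829578-m02.txt
	out/minikube-linux-amd64 -p ha-829578 ssh -n ha-829578-m02 "sudo cat /home/docker/cp-test_ha-829578_ha-829578-m02.txt"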

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 node stop m02 -v=7 --alsologtostderr
E1213 19:21:02.220831   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:21:22.702883   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:22:03.665021   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-829578 node stop m02 -v=7 --alsologtostderr: (1m30.970681566s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr: exit status 7 (606.300217ms)

                                                
                                                
-- stdout --
	ha-829578
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-829578-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-829578-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-829578-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:22:31.771688   34738 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:22:31.771784   34738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:22:31.771792   34738 out.go:358] Setting ErrFile to fd 2...
	I1213 19:22:31.771796   34738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:22:31.771949   34738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:22:31.772111   34738 out.go:352] Setting JSON to false
	I1213 19:22:31.772131   34738 mustload.go:65] Loading cluster: ha-829578
	I1213 19:22:31.772174   34738 notify.go:220] Checking for updates...
	I1213 19:22:31.772490   34738 config.go:182] Loaded profile config "ha-829578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:22:31.772507   34738 status.go:174] checking status of ha-829578 ...
	I1213 19:22:31.772891   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:31.772944   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:31.788203   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I1213 19:22:31.788688   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:31.789270   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:31.789310   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:31.789689   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:31.789866   34738 main.go:141] libmachine: (ha-829578) Calling .GetState
	I1213 19:22:31.791490   34738 status.go:371] ha-829578 host status = "Running" (err=<nil>)
	I1213 19:22:31.791515   34738 host.go:66] Checking if "ha-829578" exists ...
	I1213 19:22:31.791791   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:31.791826   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:31.806208   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I1213 19:22:31.806614   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:31.807075   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:31.807094   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:31.807443   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:31.807611   34738 main.go:141] libmachine: (ha-829578) Calling .GetIP
	I1213 19:22:31.810433   34738 main.go:141] libmachine: (ha-829578) DBG | domain ha-829578 has defined MAC address 52:54:00:58:5f:d9 in network mk-ha-829578
	I1213 19:22:31.810809   34738 main.go:141] libmachine: (ha-829578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:5f:d9", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:16:41 +0000 UTC Type:0 Mac:52:54:00:58:5f:d9 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-829578 Clientid:01:52:54:00:58:5f:d9}
	I1213 19:22:31.810862   34738 main.go:141] libmachine: (ha-829578) DBG | domain ha-829578 has defined IP address 192.168.39.46 and MAC address 52:54:00:58:5f:d9 in network mk-ha-829578
	I1213 19:22:31.810989   34738 host.go:66] Checking if "ha-829578" exists ...
	I1213 19:22:31.811382   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:31.811423   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:31.825564   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I1213 19:22:31.825988   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:31.826478   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:31.826504   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:31.826813   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:31.827006   34738 main.go:141] libmachine: (ha-829578) Calling .DriverName
	I1213 19:22:31.827196   34738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:22:31.827228   34738 main.go:141] libmachine: (ha-829578) Calling .GetSSHHostname
	I1213 19:22:31.829806   34738 main.go:141] libmachine: (ha-829578) DBG | domain ha-829578 has defined MAC address 52:54:00:58:5f:d9 in network mk-ha-829578
	I1213 19:22:31.830221   34738 main.go:141] libmachine: (ha-829578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:5f:d9", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:16:41 +0000 UTC Type:0 Mac:52:54:00:58:5f:d9 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-829578 Clientid:01:52:54:00:58:5f:d9}
	I1213 19:22:31.830250   34738 main.go:141] libmachine: (ha-829578) DBG | domain ha-829578 has defined IP address 192.168.39.46 and MAC address 52:54:00:58:5f:d9 in network mk-ha-829578
	I1213 19:22:31.830379   34738 main.go:141] libmachine: (ha-829578) Calling .GetSSHPort
	I1213 19:22:31.830523   34738 main.go:141] libmachine: (ha-829578) Calling .GetSSHKeyPath
	I1213 19:22:31.830622   34738 main.go:141] libmachine: (ha-829578) Calling .GetSSHUsername
	I1213 19:22:31.830732   34738 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/ha-829578/id_rsa Username:docker}
	I1213 19:22:31.914573   34738 ssh_runner.go:195] Run: systemctl --version
	I1213 19:22:31.920942   34738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:22:31.937420   34738 kubeconfig.go:125] found "ha-829578" server: "https://192.168.39.254:8443"
	I1213 19:22:31.937455   34738 api_server.go:166] Checking apiserver status ...
	I1213 19:22:31.937498   34738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:22:31.952572   34738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W1213 19:22:31.961459   34738 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:22:31.961511   34738 ssh_runner.go:195] Run: ls
	I1213 19:22:31.967134   34738 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 19:22:31.971164   34738 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 19:22:31.971197   34738 status.go:463] ha-829578 apiserver status = Running (err=<nil>)
	I1213 19:22:31.971208   34738 status.go:176] ha-829578 status: &{Name:ha-829578 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:22:31.971226   34738 status.go:174] checking status of ha-829578-m02 ...
	I1213 19:22:31.971637   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:31.971683   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:31.986663   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I1213 19:22:31.987094   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:31.987538   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:31.987562   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:31.987840   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:31.988032   34738 main.go:141] libmachine: (ha-829578-m02) Calling .GetState
	I1213 19:22:31.989407   34738 status.go:371] ha-829578-m02 host status = "Stopped" (err=<nil>)
	I1213 19:22:31.989419   34738 status.go:384] host is not running, skipping remaining checks
	I1213 19:22:31.989424   34738 status.go:176] ha-829578-m02 status: &{Name:ha-829578-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:22:31.989437   34738 status.go:174] checking status of ha-829578-m03 ...
	I1213 19:22:31.989711   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:31.989760   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.004221   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35561
	I1213 19:22:32.004651   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.005135   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.005159   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.005512   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.005725   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetState
	I1213 19:22:32.007198   34738 status.go:371] ha-829578-m03 host status = "Running" (err=<nil>)
	I1213 19:22:32.007213   34738 host.go:66] Checking if "ha-829578-m03" exists ...
	I1213 19:22:32.007560   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:32.007604   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.022207   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I1213 19:22:32.022638   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.023158   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.023185   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.023488   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.023655   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetIP
	I1213 19:22:32.025970   34738 main.go:141] libmachine: (ha-829578-m03) DBG | domain ha-829578-m03 has defined MAC address 52:54:00:46:08:2f in network mk-ha-829578
	I1213 19:22:32.026349   34738 main.go:141] libmachine: (ha-829578-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:08:2f", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:18:45 +0000 UTC Type:0 Mac:52:54:00:46:08:2f Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-829578-m03 Clientid:01:52:54:00:46:08:2f}
	I1213 19:22:32.026375   34738 main.go:141] libmachine: (ha-829578-m03) DBG | domain ha-829578-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:46:08:2f in network mk-ha-829578
	I1213 19:22:32.026454   34738 host.go:66] Checking if "ha-829578-m03" exists ...
	I1213 19:22:32.026745   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:32.026777   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.041214   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36953
	I1213 19:22:32.041613   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.042087   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.042105   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.042459   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.042596   34738 main.go:141] libmachine: (ha-829578-m03) Calling .DriverName
	I1213 19:22:32.042761   34738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:22:32.042782   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetSSHHostname
	I1213 19:22:32.045299   34738 main.go:141] libmachine: (ha-829578-m03) DBG | domain ha-829578-m03 has defined MAC address 52:54:00:46:08:2f in network mk-ha-829578
	I1213 19:22:32.045679   34738 main.go:141] libmachine: (ha-829578-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:08:2f", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:18:45 +0000 UTC Type:0 Mac:52:54:00:46:08:2f Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-829578-m03 Clientid:01:52:54:00:46:08:2f}
	I1213 19:22:32.045702   34738 main.go:141] libmachine: (ha-829578-m03) DBG | domain ha-829578-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:46:08:2f in network mk-ha-829578
	I1213 19:22:32.045883   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetSSHPort
	I1213 19:22:32.046049   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetSSHKeyPath
	I1213 19:22:32.046181   34738 main.go:141] libmachine: (ha-829578-m03) Calling .GetSSHUsername
	I1213 19:22:32.046275   34738 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/ha-829578-m03/id_rsa Username:docker}
	I1213 19:22:32.131709   34738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:22:32.150429   34738 kubeconfig.go:125] found "ha-829578" server: "https://192.168.39.254:8443"
	I1213 19:22:32.150454   34738 api_server.go:166] Checking apiserver status ...
	I1213 19:22:32.150487   34738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:22:32.164416   34738 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W1213 19:22:32.172981   34738 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:22:32.173024   34738 ssh_runner.go:195] Run: ls
	I1213 19:22:32.177158   34738 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 19:22:32.182377   34738 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 19:22:32.182396   34738 status.go:463] ha-829578-m03 apiserver status = Running (err=<nil>)
	I1213 19:22:32.182405   34738 status.go:176] ha-829578-m03 status: &{Name:ha-829578-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:22:32.182423   34738 status.go:174] checking status of ha-829578-m04 ...
	I1213 19:22:32.182763   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:32.182804   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.197279   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I1213 19:22:32.197808   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.198357   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.198385   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.198681   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.198892   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetState
	I1213 19:22:32.200290   34738 status.go:371] ha-829578-m04 host status = "Running" (err=<nil>)
	I1213 19:22:32.200304   34738 host.go:66] Checking if "ha-829578-m04" exists ...
	I1213 19:22:32.200621   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:32.200657   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.215696   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I1213 19:22:32.216070   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.216491   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.216517   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.216862   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.217041   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetIP
	I1213 19:22:32.219411   34738 main.go:141] libmachine: (ha-829578-m04) DBG | domain ha-829578-m04 has defined MAC address 52:54:00:9a:68:d9 in network mk-ha-829578
	I1213 19:22:32.219823   34738 main.go:141] libmachine: (ha-829578-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:68:d9", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:20:10 +0000 UTC Type:0 Mac:52:54:00:9a:68:d9 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-829578-m04 Clientid:01:52:54:00:9a:68:d9}
	I1213 19:22:32.219862   34738 main.go:141] libmachine: (ha-829578-m04) DBG | domain ha-829578-m04 has defined IP address 192.168.39.127 and MAC address 52:54:00:9a:68:d9 in network mk-ha-829578
	I1213 19:22:32.219977   34738 host.go:66] Checking if "ha-829578-m04" exists ...
	I1213 19:22:32.220294   34738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:22:32.220332   34738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:22:32.234885   34738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I1213 19:22:32.235196   34738 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:22:32.235616   34738 main.go:141] libmachine: Using API Version  1
	I1213 19:22:32.235639   34738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:22:32.235898   34738 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:22:32.236075   34738 main.go:141] libmachine: (ha-829578-m04) Calling .DriverName
	I1213 19:22:32.236230   34738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:22:32.236252   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetSSHHostname
	I1213 19:22:32.238563   34738 main.go:141] libmachine: (ha-829578-m04) DBG | domain ha-829578-m04 has defined MAC address 52:54:00:9a:68:d9 in network mk-ha-829578
	I1213 19:22:32.238913   34738 main.go:141] libmachine: (ha-829578-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:68:d9", ip: ""} in network mk-ha-829578: {Iface:virbr1 ExpiryTime:2024-12-13 20:20:10 +0000 UTC Type:0 Mac:52:54:00:9a:68:d9 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-829578-m04 Clientid:01:52:54:00:9a:68:d9}
	I1213 19:22:32.238953   34738 main.go:141] libmachine: (ha-829578-m04) DBG | domain ha-829578-m04 has defined IP address 192.168.39.127 and MAC address 52:54:00:9a:68:d9 in network mk-ha-829578
	I1213 19:22:32.239068   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetSSHPort
	I1213 19:22:32.239253   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetSSHKeyPath
	I1213 19:22:32.239427   34738 main.go:141] libmachine: (ha-829578-m04) Calling .GetSSHUsername
	I1213 19:22:32.239561   34738 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/ha-829578-m04/id_rsa Username:docker}
	I1213 19:22:32.317915   34738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:22:32.333374   34738 status.go:176] ha-829578-m04 status: &{Name:ha-829578-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.58s)
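
As seen above, `status` exits 7 rather than 0 while m02 is down, so scripted checks around a stopped node have to tolerate a non-zero exit. A minimal sketch, assuming the same profile and node name:

	# stop the m02 control-plane node, then query status; a non-zero exit is expected here
	out/minikube-linux-amd64 -p ha-829578 node stop m02
	out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr || echo "status exited $? while m02 is stopped"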

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.62s)
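
The degraded/healthy checks (here and in the HAppy* subtests) only shell out to `profile list --output json` and inspect the result; the same data can be eyeballed outside the test by pretty-printing it, assuming python3 is on PATH:

	out/minikube-linux-amd64 profile list --output json | python3 -m json.tool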

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-829578 node start m02 -v=7 --alsologtostderr: (48.757148711s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-829578 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-829578 -v=7 --alsologtostderr
E1213 19:23:25.587092   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:24:44.010320   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:25:41.726917   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:26:09.428853   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-829578 -v=7 --alsologtostderr: (4m33.931167089s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-829578 --wait=true -v=7 --alsologtostderr
E1213 19:29:44.010200   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-829578 --wait=true -v=7 --alsologtostderr: (2m37.750336391s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-829578
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.79s)
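
The restart cycle above boils down to four commands; a minimal sketch with the same profile, should the sequence need to be reproduced by hand:

	out/minikube-linux-amd64 node list -p ha-829578        # record the node set before
	out/minikube-linux-amd64 stop -p ha-829578 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-829578 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-829578        # expect the same node set after the restart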

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 node delete m03 -v=7 --alsologtostderr
E1213 19:30:41.727057   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-829578 node delete m03 -v=7 --alsologtostderr: (17.215026479s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.93s)
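
Deletion is verified both from minikube's side (`status`) and from the API server; the go-template used above prints each node's Ready condition. A minimal sketch, assuming kubectl's current context already points at the ha-829578 cluster as it does during this run:

	out/minikube-linux-amd64 -p ha-829578 node delete m03
	out/minikube-linux-amd64 -p ha-829578 status
	kubectl get nodes
	kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'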

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 stop -v=7 --alsologtostderr
E1213 19:31:07.078703   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:34:44.009778   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-829578 stop -v=7 --alsologtostderr: (4m32.750581603s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr: exit status 7 (103.592413ms)

                                                
                                                
-- stdout --
	ha-829578
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-829578-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-829578-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:35:26.573555   38949 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:35:26.573669   38949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:35:26.573679   38949 out.go:358] Setting ErrFile to fd 2...
	I1213 19:35:26.573686   38949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:35:26.573881   38949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:35:26.574074   38949 out.go:352] Setting JSON to false
	I1213 19:35:26.574100   38949 mustload.go:65] Loading cluster: ha-829578
	I1213 19:35:26.574211   38949 notify.go:220] Checking for updates...
	I1213 19:35:26.574603   38949 config.go:182] Loaded profile config "ha-829578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:35:26.574627   38949 status.go:174] checking status of ha-829578 ...
	I1213 19:35:26.575139   38949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:35:26.575184   38949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:35:26.597686   38949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1213 19:35:26.598167   38949 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:35:26.598717   38949 main.go:141] libmachine: Using API Version  1
	I1213 19:35:26.598743   38949 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:35:26.599066   38949 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:35:26.599247   38949 main.go:141] libmachine: (ha-829578) Calling .GetState
	I1213 19:35:26.600783   38949 status.go:371] ha-829578 host status = "Stopped" (err=<nil>)
	I1213 19:35:26.600806   38949 status.go:384] host is not running, skipping remaining checks
	I1213 19:35:26.600811   38949 status.go:176] ha-829578 status: &{Name:ha-829578 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:35:26.600849   38949 status.go:174] checking status of ha-829578-m02 ...
	I1213 19:35:26.601134   38949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:35:26.601164   38949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:35:26.615503   38949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I1213 19:35:26.615956   38949 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:35:26.616419   38949 main.go:141] libmachine: Using API Version  1
	I1213 19:35:26.616438   38949 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:35:26.616723   38949 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:35:26.616914   38949 main.go:141] libmachine: (ha-829578-m02) Calling .GetState
	I1213 19:35:26.618196   38949 status.go:371] ha-829578-m02 host status = "Stopped" (err=<nil>)
	I1213 19:35:26.618208   38949 status.go:384] host is not running, skipping remaining checks
	I1213 19:35:26.618213   38949 status.go:176] ha-829578-m02 status: &{Name:ha-829578-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:35:26.618226   38949 status.go:174] checking status of ha-829578-m04 ...
	I1213 19:35:26.618480   38949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:35:26.618531   38949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:35:26.632210   38949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I1213 19:35:26.632508   38949 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:35:26.632865   38949 main.go:141] libmachine: Using API Version  1
	I1213 19:35:26.632882   38949 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:35:26.633168   38949 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:35:26.633314   38949 main.go:141] libmachine: (ha-829578-m04) Calling .GetState
	I1213 19:35:26.634682   38949 status.go:371] ha-829578-m04 host status = "Stopped" (err=<nil>)
	I1213 19:35:26.634695   38949 status.go:384] host is not running, skipping remaining checks
	I1213 19:35:26.634701   38949 status.go:176] ha-829578-m04 status: &{Name:ha-829578-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (126s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-829578 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 19:35:41.728644   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:37:04.790894   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-829578 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.305564204s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.00s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-829578 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-829578 --control-plane -v=7 --alsologtostderr: (1m15.432697314s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.24s)
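
Re-adding a control-plane member is a single command; a minimal sketch mirroring the step above:

	out/minikube-linux-amd64 node add -p ha-829578 --control-plane
	out/minikube-linux-amd64 -p ha-829578 status -v=7 --alsologtostderr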

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (54.27s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-735019 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1213 19:39:44.011065   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-735019 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (54.274123081s)
--- PASS: TestJSONOutput/start/Command (54.27s)
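
With --output=json, minikube prints one CloudEvent per line on stdout (the io.k8s.sigs.minikube.step / .info / .error types visible in TestErrorJSONOutput further down). A minimal sketch for pulling out just the step messages, assuming jq is installed:

	out/minikube-linux-amd64 start -p json-output-735019 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'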

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-735019 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-735019 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-735019 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-735019 --output=json --user=testUser: (7.371287713s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-588170 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-588170 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.637837ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1cf78add-8d73-40d5-82e7-ae1d7666d2d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-588170] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be0d8e09-657d-430c-a580-3f1f67a07df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"6c5f6f60-e27c-4a4a-8385-dbe7d2423f67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"21f31746-b752-46ae-81a5-0a1e8cb03748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig"}}
	{"specversion":"1.0","id":"c263b701-2604-4047-a49a-b88451cc8c1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube"}}
	{"specversion":"1.0","id":"539b715a-29b5-45d8-a566-2e90d0ebadde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"60c1fba3-05f4-4e3a-bd6b-f2821a65b329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8add6d66-5527-4d99-af9d-3170df5ef246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-588170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-588170
--- PASS: TestErrorJSONOutput (0.19s)
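
On the failure path the final event is an io.k8s.sigs.minikube.error carrying the exit code and error name (DRV_UNSUPPORTED_OS, exitcode 56 above), so a JSON consumer can key off that event rather than parsing stderr. A minimal sketch, again assuming jq is available:

	out/minikube-linux-amd64 start -p json-output-error-588170 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'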

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (81.04s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-956153 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-956153 --driver=kvm2  --container-runtime=crio: (38.44390026s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-965527 --driver=kvm2  --container-runtime=crio
E1213 19:40:41.728596   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-965527 --driver=kvm2  --container-runtime=crio: (39.862661828s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-956153
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-965527
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-965527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-965527
helpers_test.go:175: Cleaning up "first-956153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-956153
--- PASS: TestMinikubeProfile (81.04s)
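
The profile test simply creates two clusters, flips the active profile between them, and cleans up; the equivalent manual sequence, reusing the profile names from this run:

	out/minikube-linux-amd64 start -p first-956153 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p second-965527 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 profile first-956153      # make first-956153 the active profile
	out/minikube-linux-amd64 profile list -ojson
	out/minikube-linux-amd64 delete -p second-965527
	out/minikube-linux-amd64 delete -p first-956153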

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-841183 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-841183 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.834555485s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.83s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-841183 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-841183 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
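
The mount tests boot a Kubernetes-less VM with a host directory exposed over 9p and then probe it from inside the guest; the verification is just the two ssh commands above. A minimal sketch reusing the flags from this run:

	out/minikube-linux-amd64 start -p mount-start-1-841183 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-841183 ssh -- ls /minikube-host      # host directory visible in the guest
	out/minikube-linux-amd64 -p mount-start-1-841183 ssh -- mount | grep 9p        # confirm the 9p mount is present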

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-870528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-870528 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.998747443s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.00s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-841183 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-870528
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-870528: (1.26417138s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.17s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-870528
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-870528: (21.166980229s)
--- PASS: TestMountStart/serial/RestartStopped (22.17s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-870528 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352319 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-352319 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.348412215s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.74s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-352319 -- rollout status deployment/busybox: (5.819898248s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-775xg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-cxtm6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-775xg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-cxtm6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-775xg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-cxtm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-775xg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-775xg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-cxtm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-352319 -- exec busybox-7dff88458-cxtm6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (50.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-352319 -v 3 --alsologtostderr
E1213 19:44:44.009735   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-352319 -v 3 --alsologtostderr: (49.901199423s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.45s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-352319 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp testdata/cp-test.txt multinode-352319:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1004910253/001/cp-test_multinode-352319.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319:/home/docker/cp-test.txt multinode-352319-m02:/home/docker/cp-test_multinode-352319_multinode-352319-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test_multinode-352319_multinode-352319-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319:/home/docker/cp-test.txt multinode-352319-m03:/home/docker/cp-test_multinode-352319_multinode-352319-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test_multinode-352319_multinode-352319-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp testdata/cp-test.txt multinode-352319-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1004910253/001/cp-test_multinode-352319-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m02:/home/docker/cp-test.txt multinode-352319:/home/docker/cp-test_multinode-352319-m02_multinode-352319.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test_multinode-352319-m02_multinode-352319.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m02:/home/docker/cp-test.txt multinode-352319-m03:/home/docker/cp-test_multinode-352319-m02_multinode-352319-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test_multinode-352319-m02_multinode-352319-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp testdata/cp-test.txt multinode-352319-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1004910253/001/cp-test_multinode-352319-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m03:/home/docker/cp-test.txt multinode-352319:/home/docker/cp-test_multinode-352319-m03_multinode-352319.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319 "sudo cat /home/docker/cp-test_multinode-352319-m03_multinode-352319.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 cp multinode-352319-m03:/home/docker/cp-test.txt multinode-352319-m02:/home/docker/cp-test_multinode-352319-m03_multinode-352319-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 ssh -n multinode-352319-m02 "sudo cat /home/docker/cp-test_multinode-352319-m03_multinode-352319-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
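The CopyFile sequence above is a long series of cp / ssh-cat pairs: each pair copies a file to a node and reads it back on the target. A condensed Go sketch of one such round trip follows, assuming the multinode-352319 profile and the testdata/cp-test.txt fixture named in the log; it is not the helpers_test.go implementation.

// sketch: copy a file into a node with "minikube cp", cat it back over ssh,
// and compare the two.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	run("-p", "multinode-352319", "cp", "testdata/cp-test.txt",
		"multinode-352319:/home/docker/cp-test.txt")
	got := run("-p", "multinode-352319", "ssh", "-n", "multinode-352319",
		"sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the source")
	}
	log.Println("cp round trip matches")
}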

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 node stop m03
E1213 19:45:41.726405   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-352319 node stop m03: (1.433565344s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-352319 status: exit status 7 (411.089918ms)

                                                
                                                
-- stdout --
	multinode-352319
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-352319-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-352319-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr: exit status 7 (400.379539ms)

                                                
                                                
-- stdout --
	multinode-352319
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-352319-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-352319-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:45:43.254334   46746 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:45:43.254437   46746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:45:43.254450   46746 out.go:358] Setting ErrFile to fd 2...
	I1213 19:45:43.254455   46746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:45:43.254629   46746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:45:43.254822   46746 out.go:352] Setting JSON to false
	I1213 19:45:43.254867   46746 mustload.go:65] Loading cluster: multinode-352319
	I1213 19:45:43.254957   46746 notify.go:220] Checking for updates...
	I1213 19:45:43.255378   46746 config.go:182] Loaded profile config "multinode-352319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:45:43.255399   46746 status.go:174] checking status of multinode-352319 ...
	I1213 19:45:43.255878   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.255918   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.271301   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40751
	I1213 19:45:43.271734   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.272368   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.272397   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.272706   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.272879   46746 main.go:141] libmachine: (multinode-352319) Calling .GetState
	I1213 19:45:43.274374   46746 status.go:371] multinode-352319 host status = "Running" (err=<nil>)
	I1213 19:45:43.274391   46746 host.go:66] Checking if "multinode-352319" exists ...
	I1213 19:45:43.274684   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.274722   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.289444   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I1213 19:45:43.289792   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.290220   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.290242   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.290496   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.290648   46746 main.go:141] libmachine: (multinode-352319) Calling .GetIP
	I1213 19:45:43.293212   46746 main.go:141] libmachine: (multinode-352319) DBG | domain multinode-352319 has defined MAC address 52:54:00:00:4f:a5 in network mk-multinode-352319
	I1213 19:45:43.293665   46746 main.go:141] libmachine: (multinode-352319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:a5", ip: ""} in network mk-multinode-352319: {Iface:virbr1 ExpiryTime:2024-12-13 20:42:53 +0000 UTC Type:0 Mac:52:54:00:00:4f:a5 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-352319 Clientid:01:52:54:00:00:4f:a5}
	I1213 19:45:43.293696   46746 main.go:141] libmachine: (multinode-352319) DBG | domain multinode-352319 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:4f:a5 in network mk-multinode-352319
	I1213 19:45:43.293815   46746 host.go:66] Checking if "multinode-352319" exists ...
	I1213 19:45:43.294271   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.294317   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.309415   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I1213 19:45:43.309886   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.310351   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.310374   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.310641   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.310812   46746 main.go:141] libmachine: (multinode-352319) Calling .DriverName
	I1213 19:45:43.311056   46746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:45:43.311083   46746 main.go:141] libmachine: (multinode-352319) Calling .GetSSHHostname
	I1213 19:45:43.313893   46746 main.go:141] libmachine: (multinode-352319) DBG | domain multinode-352319 has defined MAC address 52:54:00:00:4f:a5 in network mk-multinode-352319
	I1213 19:45:43.314337   46746 main.go:141] libmachine: (multinode-352319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:a5", ip: ""} in network mk-multinode-352319: {Iface:virbr1 ExpiryTime:2024-12-13 20:42:53 +0000 UTC Type:0 Mac:52:54:00:00:4f:a5 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-352319 Clientid:01:52:54:00:00:4f:a5}
	I1213 19:45:43.314363   46746 main.go:141] libmachine: (multinode-352319) DBG | domain multinode-352319 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:4f:a5 in network mk-multinode-352319
	I1213 19:45:43.314489   46746 main.go:141] libmachine: (multinode-352319) Calling .GetSSHPort
	I1213 19:45:43.314657   46746 main.go:141] libmachine: (multinode-352319) Calling .GetSSHKeyPath
	I1213 19:45:43.314791   46746 main.go:141] libmachine: (multinode-352319) Calling .GetSSHUsername
	I1213 19:45:43.314935   46746 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/multinode-352319/id_rsa Username:docker}
	I1213 19:45:43.393952   46746 ssh_runner.go:195] Run: systemctl --version
	I1213 19:45:43.400472   46746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:43.413575   46746 kubeconfig.go:125] found "multinode-352319" server: "https://192.168.39.242:8443"
	I1213 19:45:43.413611   46746 api_server.go:166] Checking apiserver status ...
	I1213 19:45:43.413647   46746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:45:43.425690   46746 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W1213 19:45:43.433700   46746 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:45:43.433757   46746 ssh_runner.go:195] Run: ls
	I1213 19:45:43.437824   46746 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1213 19:45:43.441661   46746 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I1213 19:45:43.441692   46746 status.go:463] multinode-352319 apiserver status = Running (err=<nil>)
	I1213 19:45:43.441704   46746 status.go:176] multinode-352319 status: &{Name:multinode-352319 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:45:43.441728   46746 status.go:174] checking status of multinode-352319-m02 ...
	I1213 19:45:43.442005   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.442059   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.457185   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I1213 19:45:43.457642   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.458090   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.458109   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.458454   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.458618   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetState
	I1213 19:45:43.460216   46746 status.go:371] multinode-352319-m02 host status = "Running" (err=<nil>)
	I1213 19:45:43.460232   46746 host.go:66] Checking if "multinode-352319-m02" exists ...
	I1213 19:45:43.460517   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.460550   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.475478   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39119
	I1213 19:45:43.475862   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.476252   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.476276   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.476618   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.476761   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetIP
	I1213 19:45:43.479452   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | domain multinode-352319-m02 has defined MAC address 52:54:00:71:d2:42 in network mk-multinode-352319
	I1213 19:45:43.479836   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d2:42", ip: ""} in network mk-multinode-352319: {Iface:virbr1 ExpiryTime:2024-12-13 20:43:59 +0000 UTC Type:0 Mac:52:54:00:71:d2:42 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-352319-m02 Clientid:01:52:54:00:71:d2:42}
	I1213 19:45:43.479883   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | domain multinode-352319-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:71:d2:42 in network mk-multinode-352319
	I1213 19:45:43.479942   46746 host.go:66] Checking if "multinode-352319-m02" exists ...
	I1213 19:45:43.480347   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.480400   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.495062   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I1213 19:45:43.495537   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.496134   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.496159   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.496499   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.496699   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .DriverName
	I1213 19:45:43.496848   46746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:45:43.496872   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetSSHHostname
	I1213 19:45:43.499765   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | domain multinode-352319-m02 has defined MAC address 52:54:00:71:d2:42 in network mk-multinode-352319
	I1213 19:45:43.500142   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:d2:42", ip: ""} in network mk-multinode-352319: {Iface:virbr1 ExpiryTime:2024-12-13 20:43:59 +0000 UTC Type:0 Mac:52:54:00:71:d2:42 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-352319-m02 Clientid:01:52:54:00:71:d2:42}
	I1213 19:45:43.500169   46746 main.go:141] libmachine: (multinode-352319-m02) DBG | domain multinode-352319-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:71:d2:42 in network mk-multinode-352319
	I1213 19:45:43.500320   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetSSHPort
	I1213 19:45:43.500482   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetSSHKeyPath
	I1213 19:45:43.500664   46746 main.go:141] libmachine: (multinode-352319-m02) Calling .GetSSHUsername
	I1213 19:45:43.500791   46746 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20090-12353/.minikube/machines/multinode-352319-m02/id_rsa Username:docker}
	I1213 19:45:43.577490   46746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:43.590810   46746 status.go:176] multinode-352319-m02 status: &{Name:multinode-352319-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:45:43.590841   46746 status.go:174] checking status of multinode-352319-m03 ...
	I1213 19:45:43.591186   46746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:45:43.591231   46746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:45:43.607521   46746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I1213 19:45:43.607991   46746 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:45:43.608492   46746 main.go:141] libmachine: Using API Version  1
	I1213 19:45:43.608514   46746 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:45:43.608808   46746 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:45:43.608981   46746 main.go:141] libmachine: (multinode-352319-m03) Calling .GetState
	I1213 19:45:43.610313   46746 status.go:371] multinode-352319-m03 host status = "Stopped" (err=<nil>)
	I1213 19:45:43.610331   46746 status.go:384] host is not running, skipping remaining checks
	I1213 19:45:43.610338   46746 status.go:176] multinode-352319-m03 status: &{Name:multinode-352319-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
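Note that both status invocations above exit with status 7 once m03 is stopped; the test treats that as an expected report rather than a command failure. The Go sketch below shows one way to tell those cases apart, assuming the same binary and profile as the log; it is an illustration only.

// sketch: distinguish "status ran but reported a stopped node" (non-zero exit
// code) from "status could not run at all".
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-352319", "status").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Print(string(out))
	case errors.As(err, &exitErr):
		fmt.Printf("status exited with code %d:\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not run status: %v", err)
	}
}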

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-352319 node start m03 -v=7 --alsologtostderr: (38.028029371s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (342.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-352319
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-352319
E1213 19:47:47.080131   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-352319: (3m3.154927796s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352319 --wait=true -v=8 --alsologtostderr
E1213 19:49:44.011473   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:50:41.726903   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-352319 --wait=true -v=8 --alsologtostderr: (2m39.735699053s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-352319
--- PASS: TestMultiNode/serial/RestartKeepsNodes (342.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-352319 node delete m03: (2.14196546s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.65s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 stop
E1213 19:53:44.792290   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:54:44.010704   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-352319 stop: (3m1.865417208s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-352319 status: exit status 7 (85.437044ms)

                                                
                                                
-- stdout --
	multinode-352319
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-352319-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr: exit status 7 (80.03077ms)

                                                
                                                
-- stdout --
	multinode-352319
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-352319-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:55:09.860408   49781 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:55:09.860514   49781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:55:09.860524   49781 out.go:358] Setting ErrFile to fd 2...
	I1213 19:55:09.860529   49781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:55:09.860680   49781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 19:55:09.860818   49781 out.go:352] Setting JSON to false
	I1213 19:55:09.860838   49781 mustload.go:65] Loading cluster: multinode-352319
	I1213 19:55:09.860884   49781 notify.go:220] Checking for updates...
	I1213 19:55:09.861208   49781 config.go:182] Loaded profile config "multinode-352319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:55:09.861225   49781 status.go:174] checking status of multinode-352319 ...
	I1213 19:55:09.861609   49781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:55:09.861663   49781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:55:09.876187   49781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I1213 19:55:09.876695   49781 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:55:09.877199   49781 main.go:141] libmachine: Using API Version  1
	I1213 19:55:09.877219   49781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:55:09.877587   49781 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:55:09.877794   49781 main.go:141] libmachine: (multinode-352319) Calling .GetState
	I1213 19:55:09.879398   49781 status.go:371] multinode-352319 host status = "Stopped" (err=<nil>)
	I1213 19:55:09.879415   49781 status.go:384] host is not running, skipping remaining checks
	I1213 19:55:09.879422   49781 status.go:176] multinode-352319 status: &{Name:multinode-352319 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:55:09.879455   49781 status.go:174] checking status of multinode-352319-m02 ...
	I1213 19:55:09.879862   49781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 19:55:09.879903   49781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 19:55:09.893870   49781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I1213 19:55:09.894189   49781 main.go:141] libmachine: () Calling .GetVersion
	I1213 19:55:09.894613   49781 main.go:141] libmachine: Using API Version  1
	I1213 19:55:09.894632   49781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 19:55:09.894956   49781 main.go:141] libmachine: () Calling .GetMachineName
	I1213 19:55:09.895116   49781 main.go:141] libmachine: (multinode-352319-m02) Calling .GetState
	I1213 19:55:09.896480   49781 status.go:371] multinode-352319-m02 host status = "Stopped" (err=<nil>)
	I1213 19:55:09.896495   49781 status.go:384] host is not running, skipping remaining checks
	I1213 19:55:09.896501   49781 status.go:176] multinode-352319-m02 status: &{Name:multinode-352319-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.03s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (113.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352319 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 19:55:41.726304   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-352319 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.361764384s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-352319 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.88s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-352319
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352319-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-352319-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.847273ms)

                                                
                                                
-- stdout --
	* [multinode-352319-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-352319-m02' is duplicated with machine name 'multinode-352319-m02' in profile 'multinode-352319'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352319-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-352319-m03 --driver=kvm2  --container-runtime=crio: (39.736126044s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-352319
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-352319: exit status 80 (210.381587ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-352319 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-352319-m03 already exists in multinode-352319-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-352319-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.82s)

                                                
                                    
TestScheduledStopUnix (109.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-622305 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-622305 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.215104947s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622305 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-622305 -n scheduled-stop-622305
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622305 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1213 20:03:13.531396   19544 retry.go:31] will retry after 58.642µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.532551   19544 retry.go:31] will retry after 172.865µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.533720   19544 retry.go:31] will retry after 231.924µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.534878   19544 retry.go:31] will retry after 435.792µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.536007   19544 retry.go:31] will retry after 418.31µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.537131   19544 retry.go:31] will retry after 611.146µs: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.538253   19544 retry.go:31] will retry after 1.588703ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.540450   19544 retry.go:31] will retry after 2.519419ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.543641   19544 retry.go:31] will retry after 2.421426ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.546840   19544 retry.go:31] will retry after 2.692747ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.550076   19544 retry.go:31] will retry after 3.051034ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.553246   19544 retry.go:31] will retry after 10.152389ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.564459   19544 retry.go:31] will retry after 15.895818ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.580719   19544 retry.go:31] will retry after 26.124272ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
I1213 20:03:13.607973   19544 retry.go:31] will retry after 38.385239ms: open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/scheduled-stop-622305/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622305 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622305 -n scheduled-stop-622305
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-622305
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622305 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-622305
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-622305: exit status 7 (64.37723ms)

                                                
                                                
-- stdout --
	scheduled-stop-622305
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622305 -n scheduled-stop-622305
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622305 -n scheduled-stop-622305: exit status 7 (61.189533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-622305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-622305
--- PASS: TestScheduledStopUnix (109.75s)
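The scheduled-stop flow exercised above is: schedule a stop, inspect the pending timer through the TimeToStop status field, cancel, reschedule with a short delay, and finally observe the stopped state. A compressed Go sketch of the schedule / inspect / cancel portion follows, using only flags that appear in the log and the profile name from the log; it is not the scheduled_stop_test.go code.

// sketch: schedule a stop, read back TimeToStop, then cancel the schedule.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		// status may exit non-zero once the VM is stopped; just surface it.
		log.Printf("%v: %v", args, err)
	}
	return string(out)
}

func main() {
	mk("stop", "-p", "scheduled-stop-622305", "--schedule", "5m")
	fmt.Print(mk("status", "--format={{.TimeToStop}}", "-p", "scheduled-stop-622305"))
	mk("stop", "-p", "scheduled-stop-622305", "--cancel-scheduled")
}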

                                                
                                    
TestRunningBinaryUpgrade (210.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.689861289 start -p running-upgrade-176442 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1213 20:04:44.009460   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.689861289 start -p running-upgrade-176442 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m47.271029952s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-176442 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-176442 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.604108564s)
helpers_test.go:175: Cleaning up "running-upgrade-176442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-176442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-176442: (1.171794716s)
--- PASS: TestRunningBinaryUpgrade (210.14s)
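The upgrade exercised above starts a cluster with an older minikube release and then re-runs "start" against the same profile with the freshly built binary. The Go sketch below condenses that sequence; the old-binary path is the temporary file from the log (substitute your own copy of an older release), the profile name comes from the log, and extra logging flags from the original run are omitted.

// sketch: start with an older binary, then upgrade in place by re-running
// "start" on the same profile with the current binary.
package main

import (
	"log"
	"os/exec"
)

func start(bin string, extra ...string) {
	args := append([]string{"start", "-p", "running-upgrade-176442", "--memory=2200"}, extra...)
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s start failed: %v\n%s", bin, err, out)
	}
}

func main() {
	start("/tmp/minikube-v1.26.0.689861289", "--vm-driver=kvm2", "--container-runtime=crio")
	start("out/minikube-linux-amd64", "--driver=kvm2", "--container-runtime=crio")
}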

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (83.5309ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-397374] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397374 --driver=kvm2  --container-runtime=crio
E1213 20:04:27.081761   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397374 --driver=kvm2  --container-runtime=crio: (1m33.885019771s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-397374 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.12s)

                                                
                                    
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-918860 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-918860 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (101.0487ms)

                                                
                                                
-- stdout --
	* [false-918860] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 20:04:27.599100   54645 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:04:27.599220   54645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:04:27.599232   54645 out.go:358] Setting ErrFile to fd 2...
	I1213 20:04:27.599239   54645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:04:27.599425   54645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-12353/.minikube/bin
	I1213 20:04:27.599958   54645 out.go:352] Setting JSON to false
	I1213 20:04:27.600795   54645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6411,"bootTime":1734113857,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 20:04:27.600886   54645 start.go:139] virtualization: kvm guest
	I1213 20:04:27.603084   54645 out.go:177] * [false-918860] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 20:04:27.604426   54645 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:04:27.604493   54645 notify.go:220] Checking for updates...
	I1213 20:04:27.607051   54645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:04:27.608384   54645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-12353/kubeconfig
	I1213 20:04:27.609664   54645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-12353/.minikube
	I1213 20:04:27.610786   54645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 20:04:27.611942   54645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:04:27.613584   54645 config.go:182] Loaded profile config "NoKubernetes-397374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:04:27.613727   54645 config.go:182] Loaded profile config "force-systemd-env-502984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:04:27.613866   54645 config.go:182] Loaded profile config "offline-crio-372192": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:04:27.613972   54645 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:04:27.648513   54645 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 20:04:27.649944   54645 start.go:297] selected driver: kvm2
	I1213 20:04:27.649955   54645 start.go:901] validating driver "kvm2" against <nil>
	I1213 20:04:27.649965   54645 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:04:27.651723   54645 out.go:201] 
	W1213 20:04:27.652756   54645 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 20:04:27.653799   54645 out.go:201] 

                                                
                                                
** /stderr **
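Note: exit status 14 (MK_USAGE) is what this run is expected to record: the crio container runtime requires a CNI, so the start command is rejected immediately when --cni=false is requested. For comparison only (a hypothetical invocation, not part of this test), the same driver/runtime pair starts once any CNI is selected, e.g.:

	$ out/minikube-linux-amd64 start -p false-918860 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio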
net_test.go:88: 
----------------------- debugLogs start: false-918860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-918860" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-918860

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-918860"

                                                
                                                
----------------------- debugLogs end: false-918860 [took: 2.896719894s] --------------------------------
helpers_test.go:175: Cleaning up "false-918860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-918860
--- PASS: TestNetworkPlugins/group/false (3.36s)

TestStoppedBinaryUpgrade/Setup (2.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

TestStoppedBinaryUpgrade/Upgrade (133.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.532645612 start -p stopped-upgrade-154879 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.532645612 start -p stopped-upgrade-154879 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m22.729390681s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.532645612 -p stopped-upgrade-154879 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.532645612 -p stopped-upgrade-154879 stop: (1.418825918s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-154879 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-154879 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.417822663s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.57s)

TestNoKubernetes/serial/StartWithStopK8s (62.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.167914197s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-397374 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-397374 status -o json: exit status 2 (215.129911ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-397374","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-397374
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.20s)

TestNoKubernetes/serial/Start (29.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397374 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.689465854s)
--- PASS: TestNoKubernetes/serial/Start (29.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-397374 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-397374 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.321773ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
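Note: the non-zero exit is the desired outcome of this check; systemctl is-active returns a non-zero status (3 here) when the kubelet unit is not active, confirming that the --no-kubernetes profile is running without Kubernetes. An equivalent manual check (illustrative; prints the unit state such as "inactive" rather than relying on the exit code) would be:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-397374 "sudo systemctl is-active kubelet"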
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (27.35s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.363578062s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.984580545s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.35s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-397374
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-397374: (1.351090535s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (22.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-397374 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-397374 --driver=kvm2  --container-runtime=crio: (22.374254752s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.37s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-154879
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-397374 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-397374 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.870451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestPause/serial/Start (92.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-822439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1213 20:09:44.011070   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-822439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m32.242439879s)
--- PASS: TestPause/serial/Start (92.24s)

TestNetworkPlugins/group/auto/Start (62.58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1213 20:10:24.794180   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m2.578079885s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-918860 "pgrep -a kubelet"
I1213 20:10:54.965192   19544 config.go:182] Loaded profile config "auto-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7pb5" [131cd53c-070e-4e03-ac74-93aaef21e265] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d7pb5" [131cd53c-070e-4e03-ac74-93aaef21e265] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004785039s
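Note: each NetCatPod step deploys testdata/netcat-deployment.yaml and then polls until pods labelled app=netcat are Running (up to the 15m wait timeout). An equivalent manual wait against the same context could be expressed as (illustrative only):

	$ kubectl --context auto-918860 wait --for=condition=ready pod --selector=app=netcat --timeout=15m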
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)

TestPause/serial/SecondStartNoReconfiguration (40.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-822439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-822439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.1363707s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.15s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (64.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.117157533s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.12s)

TestPause/serial/Pause (0.68s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-822439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-822439 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-822439 --output=json --layout=cluster: exit status 2 (256.338609ms)

                                                
                                                
-- stdout --
	{"Name":"pause-822439","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-822439","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
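Note: in the --layout=cluster JSON above the profile reports StatusCode 418 ("Paused") overall, with the apiserver paused (418) and the kubelet stopped (405), and the status command exits 2, which is why the non-zero exit is accepted here. Assuming jq is available on the host, the per-component view can be pulled out of the same output:

	$ out/minikube-linux-amd64 status -p pause-822439 --output=json --layout=cluster | jq '.Nodes[0].Components'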
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-822439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-822439 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-822439 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-822439 --alsologtostderr -v=5: (1.047105217s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

TestPause/serial/VerifyDeletedResources (2.25s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.246537339s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.25s)

TestNetworkPlugins/group/calico/Start (77.63s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.634886569s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.63s)

TestNetworkPlugins/group/custom-flannel/Start (92.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.646329809s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kjhsv" [f526433e-e7ec-4bf0-a450-4de897172fa0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005851065s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-918860 "pgrep -a kubelet"
I1213 20:12:33.709961   19544 config.go:182] Loaded profile config "kindnet-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wp4vl" [7731f300-376b-496f-85c5-42df3ee7fd6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wp4vl" [7731f300-376b-496f-85c5-42df3ee7fd6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004985126s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.24s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g5b6z" [d8937a3e-03e1-431d-ab6e-9d94dc1e9b10] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004509984s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (56.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (56.599024632s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.60s)

TestNetworkPlugins/group/flannel/Start (97.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m37.010509784s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-918860 "pgrep -a kubelet"
I1213 20:13:09.324660   19544 config.go:182] Loaded profile config "calico-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-glkvw" [7d534229-2f2f-48db-bd1c-260440aeb0b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-glkvw" [7d534229-2f2f-48db-bd1c-260440aeb0b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003952884s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.21s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-918860 "pgrep -a kubelet"
I1213 20:13:23.942083   19544 config.go:182] Loaded profile config "custom-flannel-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zsw9m" [d4216745-2dff-486c-a0fb-14d1c6a68b7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zsw9m" [d4216745-2dff-486c-a0fb-14d1c6a68b7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004374018s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (101.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-918860 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.734005063s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.73s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-918860 "pgrep -a kubelet"
I1213 20:14:00.848990   19544 config.go:182] Loaded profile config "enable-default-cni-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m4bkx" [5070f20e-dc07-4c0c-a9df-63818a413f26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m4bkx" [5070f20e-dc07-4c0c-a9df-63818a413f26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005852095s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-918860 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-918860 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128631792s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 20:14:28.207835   19544 retry.go:31] will retry after 728.692262ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-918860 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-918860 exec deployment/netcat -- nslookup kubernetes.default: (5.141699469s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f9ft8" [69793bc8-0986-4e6d-b781-42a8f9aadfb4] Running
E1213 20:14:44.009634   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/addons-649719/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005444624s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-918860 "pgrep -a kubelet"
I1213 20:14:47.833118   19544 config.go:182] Loaded profile config "flannel-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sqtbm" [3361c56a-de0b-4a6a-ba49-9846910225c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sqtbm" [3361c56a-de0b-4a6a-ba49-9846910225c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00450896s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (55.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-191190 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-191190 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (55.141643954s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-475934 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-475934 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m13.184059238s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-918860 "pgrep -a kubelet"
I1213 20:15:21.684894   19544 config.go:182] Loaded profile config "bridge-918860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-918860 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-918860 replace --force -f testdata/netcat-deployment.yaml: (1.650726131s)
I1213 20:15:23.348146   19544 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ptjvq" [9f13012f-897b-4f8e-9181-2cb2ba3fdaae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ptjvq" [9f13012f-897b-4f8e-9181-2cb2ba3fdaae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004244671s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.69s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-918860 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-918860 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-191190 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [98f41268-c72e-45ea-bf7d-28d4d1c5870b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [98f41268-c72e-45ea-bf7d-28d4d1c5870b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0036923s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-191190 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.40s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-355668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:15:55.165507   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.171967   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.183410   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.204820   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.246219   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.327642   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:55.489250   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-355668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (56.403967143s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-191190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 20:15:55.811061   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:15:56.452942   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-191190 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-191190 --alsologtostderr -v=3
E1213 20:15:57.734388   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:16:00.296507   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:16:05.418766   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:16:15.660607   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-191190 --alsologtostderr -v=3: (1m31.264485952s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-475934 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [342144b3-7b81-4026-a702-0ff2e36789f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [342144b3-7b81-4026-a702-0ff2e36789f2] Running
E1213 20:16:36.142369   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003544118s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-475934 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-475934 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-475934 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-475934 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-475934 --alsologtostderr -v=3: (1m31.019170138s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-355668 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00be91bc-86f6-47d3-9014-e345d6f6ce59] Pending
helpers_test.go:344: "busybox" [00be91bc-86f6-47d3-9014-e345d6f6ce59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00be91bc-86f6-47d3-9014-e345d6f6ce59] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003852977s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-355668 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-355668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-355668 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-355668 --alsologtostderr -v=3
E1213 20:17:17.104247   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.499294   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.505647   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.516968   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.538296   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.579657   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.661081   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:27.822616   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-355668 --alsologtostderr -v=3: (1m31.532504987s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-191190 -n embed-certs-191190
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-191190 -n embed-certs-191190: exit status 7 (60.694545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-191190 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1213 20:17:28.144050   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (296.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-191190 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:17:28.786154   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:30.067554   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:32.629244   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:37.751212   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:17:47.993011   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.112580   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.118976   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.130357   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.151788   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.193193   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.274477   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.435965   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:03.757958   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:04.399353   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:05.680713   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:08.242320   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:08.474961   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-191190 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (4m56.315044766s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-191190 -n embed-certs-191190
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (296.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475934 -n no-preload-475934
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475934 -n no-preload-475934: exit status 7 (65.85618ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-475934 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (348.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-475934 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:18:13.364217   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:23.606062   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.127882   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.134296   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.145671   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.167086   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.208470   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.289906   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.452086   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:24.773794   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:25.415886   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:26.697395   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:29.259593   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-475934 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m48.606707036s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475934 -n no-preload-475934
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668: exit status 7 (64.048689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-355668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-355668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:18:34.381500   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:39.026514   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/auto-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:44.087335   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:44.623264   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:49.437027   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-355668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m38.824739996s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-613355 --alsologtostderr -v=3
E1213 20:20:41.726912   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/functional-916183/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:20:43.832099   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-613355 --alsologtostderr -v=3: (5.297232737s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-613355 -n old-k8s-version-613355: exit status 7 (66.264249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-613355 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dcwlp" [b030ee44-5298-41e3-a5d8-54e65814439b] Running
E1213 20:22:25.459020   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:22:27.499549   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003744961s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dcwlp" [b030ee44-5298-41e3-a5d8-54e65814439b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004532744s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-191190 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-191190 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-191190 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-191190 -n embed-certs-191190
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-191190 -n embed-certs-191190: exit status 2 (236.406311ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-191190 -n embed-certs-191190
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-191190 -n embed-certs-191190: exit status 2 (234.639096ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-191190 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-191190 -n embed-certs-191190
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-191190 -n embed-certs-191190
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-535459 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:22:55.201232   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/kindnet-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:23:03.111866   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:23:07.197196   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/bridge-918860/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:23:24.126998   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-535459 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (46.526989132s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-535459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-535459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057716045s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-535459 --alsologtostderr -v=3
E1213 20:23:30.812567   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/calico-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-535459 --alsologtostderr -v=3: (10.512999003s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535459 -n newest-cni-535459
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535459 -n newest-cni-535459: exit status 7 (64.727274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-535459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (39.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-535459 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:23:51.830231   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/custom-flannel-918860/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-535459 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (38.791863159s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-535459 -n newest-cni-535459
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jbm4d" [10f59fe3-e168-40c9-9ae9-be0043a051c9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1213 20:24:01.060148   19544 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-12353/.minikube/profiles/enable-default-cni-918860/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jbm4d" [10f59fe3-e168-40c9-9ae9-be0043a051c9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005234188s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4qdq4" [cbcbc3f6-17a6-4924-8368-67f187a6340d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4qdq4" [cbcbc3f6-17a6-4924-8368-67f187a6340d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005813337s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jbm4d" [10f59fe3-e168-40c9-9ae9-be0043a051c9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004410821s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-475934 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-475934 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
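The image verification parses the JSON listing shown above; to eyeball the same data by hand one can list the profile's images and filter out the core Kubernetes registry (the grep pattern is illustrative, not the test's own logic):

	out/minikube-linux-amd64 -p no-preload-475934 image list
	out/minikube-linux-amd64 -p no-preload-475934 image list | grep -v 'registry.k8s.io'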

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-475934 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475934 -n no-preload-475934
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475934 -n no-preload-475934: exit status 2 (257.39851ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-475934 -n no-preload-475934
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-475934 -n no-preload-475934: exit status 2 (246.946165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-475934 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475934 -n no-preload-475934
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-475934 -n no-preload-475934
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.86s)
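For reference, the pause/unpause round trip exercised here can be replayed by hand; minikube status exits with code 2 while components are paused or stopped, which is why the test logs it as "may be ok" (commands mirror this run's profile):

	out/minikube-linux-amd64 pause -p no-preload-475934 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475934 -n no-preload-475934   # prints Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-475934 -n no-preload-475934     # prints Stopped, exit 2
	out/minikube-linux-amd64 unpause -p no-preload-475934 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475934 -n no-preload-475934   # exit 0 once unpaused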

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4qdq4" [cbcbc3f6-17a6-4924-8368-67f187a6340d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005828919s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-355668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-535459 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-535459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-535459 --alsologtostderr -v=1: (1.779504821s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535459 -n newest-cni-535459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535459 -n newest-cni-535459: exit status 2 (388.734583ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535459 -n newest-cni-535459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535459 -n newest-cni-535459: exit status 2 (272.186734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-535459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-535459 --alsologtostderr -v=1: (1.076656292s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-535459 -n newest-cni-535459
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-535459 -n newest-cni-535459
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-355668 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-355668 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-355668 --alsologtostderr -v=1: (1.018803269s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668: exit status 2 (235.939806ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668: exit status 2 (235.841217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-355668 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-355668 -n default-k8s-diff-port-355668
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

                                                
                                    

Test skip (39/326)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 2.83
265 TestNetworkPlugins/group/cilium 3.07
281 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649719 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-918860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-918860" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-918860

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-918860"

                                                
                                                
----------------------- debugLogs end: kubenet-918860 [took: 2.68679625s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-918860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-918860
--- SKIP: TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-918860 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-918860

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-918860" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-918860" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-918860" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-918860" does not exist

>>> k8s: coredns logs:
error: context "cilium-918860" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-918860" does not exist

>>> k8s: api server logs:
error: context "cilium-918860" does not exist

>>> host: /etc/cni:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: ip a s:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: ip r s:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: iptables-save:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: iptables table nat:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-918860

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-918860

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-918860" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-918860" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-918860

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-918860

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-918860" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-918860" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-918860" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-918860" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-918860" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: kubelet daemon config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> k8s: kubelet logs:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-918860

>>> host: docker daemon status:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: docker daemon config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: docker system info:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: cri-docker daemon status:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: cri-docker daemon config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: cri-dockerd version:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: containerd daemon status:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: containerd daemon config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: containerd config dump:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: crio daemon status:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: crio daemon config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: /etc/crio:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

>>> host: crio config:
* Profile "cilium-918860" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-918860"

----------------------- debugLogs end: cilium-918860 [took: 2.935210125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-918860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-918860
--- SKIP: TestNetworkPlugins/group/cilium (3.07s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-378882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-378882
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
